We’ve noted many times over the years how inconsistent pneumatic features are in sauropod vertebrae. Fossae and foramina vary between individuals of the same species, along the spinal column, and even between the two sides of a single vertebra. Here’s an example that we touched on in Wedel and Taylor (2013), but which is seen in all its glory here:

Taylor and Wedel (2021: Figure 5). Giraffatitan brancai tail MB.R.5000, part of the mounted skeleton at the Museum für Naturkunde Berlin. Caudal vertebrae 24–26 in left lateral view. While caudal 26 has no pneumatic features, caudal 25 has two distinct pneumatic fossae, likely excavated around two distinct vascular foramina carrying an artery and a vein. Caudal 24 is more shallowly excavated than 25, but may also exhibit two separate fossae.

But bone is usually the least variable material in the vertebrate body. Muscles vary more, nerves more still, and blood vessels most of all. So why are the vertebrae of sauropods so much more variable than their other bones?

Our new paper, published today (Taylor and Wedel 2021) proposes an answer! Please read it for the details, but here’s the summary:

  • Early in ontogeny, the blood supply to vertebrae comes from arteries that initially served the spinal cord, penetrating the bone of the neural canal.
  • Later in ontogeny, additional arteries penetrate the centra, leaving vascular foramina (small holes carrying blood vessels).
  • This hand-off does not always run to completion, due to the variability of blood vessels.
  • In extant birds, when pneumatic diverticula enter the bone they do so via vascular foramina, alongside blood vessels.
  • The same was probably true in sauropods.
  • So in vertebrae that got all their blood supply from vascular foramina in the neural canal, diverticula were unable to enter the centra from the outside.
  • So those centra were never pneumatized from the outside, and no externally visible pneumatic cavities were formed.

Somehow that pretty straightforward argument ended up running to eleven pages. I guess that’s what you get when you document your reasoning thoroughly, illustrate it in detail, and discuss the implications. But the heart of the paper is that little bullet-list.

Taylor and Wedel (2021: Figure 6). Domestic duck Anas platyrhynchos, dorsal vertebrae 2–7 in left lateral view. Note that the two anteriormost vertebrae (D2 and D3) each have a shallow pneumatic fossa penetrated by numerous small foramina.

(What is the relevance of these duck dorsals? You will need to read the discussion in the paper to find out!)

Our choice of publication venue

The world moves fast. It’s strange to think that only eleven years ago my Brachiosaurus revision (Taylor 2009) was in the Journal of Vertebrate Paleontology, a journal that now feels very retro. Since then, Matt and I have both published several times in PeerJ, which we love. More recently, we’ve been posting preprints of our papers — and indeed I have three papers stalled in peer-review revisions that are all available as preprints (two Taylor and Wedels and one sole-authored). But this time we’re pushing on even further into the Shiny Digital Future.

We’ve published at Qeios. (It’s pronounced “chaos”, but the site doesn’t tell you that; I discovered it on Twitter.) If you’ve not heard of it — I was only very vaguely aware of it myself until this evening — it runs on the same model as the better known F1000 Research, with this very important difference: it’s free. Also, it looks rather slicker.

That model is: publish first, then filter. This is the opposite of the traditional scholarly publishing flow, where you filter first — by peer reviewers erecting a series of obstacles to getting your work out — and only after negotiating that course do you get to see your work published. At Qeios, you go right ahead and publish: it’s available right off the bat, but clearly marked as awaiting peer review:

And then it undergoes review. Who reviews it? Anyone! Ideally, of course, people with some expertise in the relevant fields. We can then post any number of revised versions in response to the reviews — each revision having its own DOI and being fixed and permanent.

How will this work out? We don’t know. It is, in part, an experiment. What will make it work — what will impute credibility to our paper — is good, solid reviews. So if you have any relevant expertise, we do invite you to get over there and write a review.

And finally …

Matt noted that I first sent him the link to the Qeios site at 7:44 pm my time. I think that was the first time he’d heard of it. He and I had plenty of back and forth on where to publish this paper before I pushed on and did it at Qeios. And I tweeted that our paper was available for review at 8:44 — one hour exactly after Matt learned that the venue existed. Now here we are at 12:04 my time, three hours and 20 minutes later, and it’s already been viewed 126 times and downloaded 60 times. I think that’s pretty awesome.


  • Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806. [PDF]
  • Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi: 10.32388/1G6J3Q [PDF]
  • Wedel, Mathew J., and Michael P. Taylor. 2013. Caudal pneumaticity and pneumatic hiatuses in the sauropod dinosaurs Giraffatitan and Apatosaurus. PLOS ONE 8(10):e78213. 14 pages. doi: 10.1371/journal.pone.0078213 [PDF]

Down in flames

August 25, 2018

I first encountered Larry Niven’s story/essay “Down in Flames” in the collection N-Space in high school. This was after I’d read Ringworld and most of Niven’s Known Space stories, so by the time I got to “Down in Flames” I had the context to get it. (You can read the whole thing for free here.)

Here’s the idea, from near the start:

On January 14, 1968, Norman Spinrad and I were at a party thrown by Tom & Terry Pinckard. We were filling coffee cups when Spinny started this whole thing.

“You ought to drop the known space series,” he said. “You’ll get stale.” (Quotes are not necessarily dead accurate.) I explained that I was writing stories outside the “known space” history, and that I would give up the series as soon as I ran out of things to say within its framework. Which would be soon.

“Then why don’t you write a novel that tears it to shreds? Don’t just abandon known space. Destroy it!”

“But how?” (I never asked why. Norman and I think alike in some ways.)

The rest of the piece is just working out the details.

“Down in Flames” brain-wormed me. Other than Ray Bradbury’s “A Sound of Thunder” I doubt if there is another short story I’ve read as many times. Mike once described the act of building something complex and beautiful and then destroying it as “magnificently profligate”, and that’s the exact quality of “Down in Flames” that appeals to me.

I also think it is a terrific* exercise for everyone who is a scientist, or who aspires to be one.

* In both the modern sense of “wonderful” and the archaic sense of “causing terror”.

Seriously, try it. Grab a piece of paper (or open a new doc, or whatever) and write down the ideas you’ve had that you hold most dear. And then imagine what it would take for all of them to be wrong. (When teams and organizations do this for their own futures, it’s called a pre-mortem, and there’s a whole managerially-oriented literature on it. I’d read “Down in Flames” instead.)

It feels like this! Borrowed from here.

Here are some questions to help you along:

  • Which of your chains of reasoning admit more than one end-point? If none of them might lead other places, then either you are the most amazing genius of all time (even Newton and Einstein made mistakes), or you are way behind the cutting edge, and your apparent flawlessness comes from working on things that are already settled.
  • If there is a line of evidence that could potentially falsify your pet hypothesis, have you checked it? Have you drawn any attention to it? Or have you gracefully elided it from your discussions in hopes that no-one will notice, at least until after you’re dead?
  • If there’s no line of evidence that could falsify your pet hypothesis, are you actually doing science?
  • Which of your own hypotheses do you have an emotional investment in?
  • Are there findings from a rival research team (real or imagined) that you would not be happy to see published, if they were accurate?
  • Which hypotheses do you not agree with, that you would be most dismayed to see proven correct?

[And yes, Karl, I know that according to some pedants hypotheses are never ‘proven’. It’s a theoretical exercise already, so just pretend they can be!]

I’ll close with one of my favorite quotes, originally published in a couple of tweets by Angus Johnson in May of 2017 (also archived here):

If skepticism means anything it means skepticism about the things you WANT to be true. It’s easy to be a skeptic about others’ views. Embracing a set of claims just because it happens to fit your priors doesn’t make you a skeptic. It makes you a rube, a mark, a schnook.

So, don’t be that rube. Burn down your house of ideas – or at least, mentally sift through the rubble and ashes and imagine how it might have burned down. And then be honest about that, minimally with yourself, and ideally with the world.

If you’re a true intellectual badass, blog the results. I will. It’s not fair to give you all homework — painful homework — and not take the medicine myself, so I’m going to do a “Down in Flames” on my whole oeuvre in a future post. Stay tuned!

Today, available for the first time, you can read my 2004 paper A survey of dinosaur diversity by clade, age, place of discovery and year of description. It’s freely available (CC By 4.0) as a PeerJ Preprint. It’s one of those papers that does exactly what it says on the tin — you should be able to find some interesting patterns in the diversity of your own favourite dinosaur group.


Taylor (2014 for 2004), Figure 1. Breakdown of dinosaur diversity by phylogeny. The number of genera included in each clade is indicated in parentheses. Non-terminal clades additionally have, in square brackets, the number of included genera that are not also included in one of the figured subclades. For example, there are 63 theropods that are neither carnosaurs nor coelurosaurs. The thickness of the lines is proportional to the number of genera in the clades they represent.

“But Mike”, you say, “you wrote this thing ten years ago?”

Yes. It’s actually the first scientific paper I ever wrote (bar some scraps of computer science), begun in 2003. It’s so old that all the illustrations are grey-scale. I submitted it to Acta Palaeontologica Polonica way back on 24 October 2004 (three double-spaced hard-copies in the post!), but it was rejected without review. I was subsequently able to publish a greatly truncated version (Taylor 2006) in the proceedings of the 2006 Symposium on Mesozoic Terrestrial Ecosystems, but that was only one tenth the length of the full manuscript — much potentially valuable information was lost.

My finally posting this comes (as so many things seem to) from a conversation with Matt. Off work sick, he’d been amusing himself by re-reading old SV-POW! posts (yes, we do this). He was struck by my exhortation in Tutorial 14: “do not ever give a conference talk without immediately transcribing your slides into a manuscript”. He bemoaned how bad he’s been at following that advice, and I had to admit I’ve done no better, listing a sequence of my old SVPCA talks that have still never been published as papers.

The oldest of these was my 2004 presentation on dinosaur diversity. Commenting on this, I wrote in email: “OK, I got the MTE four-pager out of this, but the talk was distilled from a 40ish-page manuscript that was never published and never will be.” Quick as a flash, Matt replied:

If I had written this and sent it to you, you’d tell me to put it online and blog about how I went from idea to long paper to talk to short paper, to illuminate the process of science.

And of course he was right — hence this preprint.


Taylor (2014 for 2004), Figure 2. Breakdown of dinosaurian diversity by high-level taxa. “Other sauropodomorphs” are the “prosauropods” sensu lato. “Other theropods” include coelophysoids, neoceratosaurs, torvosaurs (= megalosaurs) and spinosaurs. “Other ornithischians” are basal forms, including heterodontosaurs and those that fall into Marginocephalia or Thyreophora but not into a figured subclade.

I will never update this manuscript, as it’s based on a now wildly outdated database and I have too much else happening. (For one thing, I really ought to get around to finishing up the paper based on my 2005 SVPCA talk!) So in a sense it’s odd to call it a “pre-print” — it’s not pre anything.

Despite the data being well out of date, this manuscript still contains much that is (I think) of interest, and my sense is that the ratios of taxon counts, if not the absolute numbers, are still pretty accurate.

I don’t expect ever to submit a version of this to a journal, so this can be considered the final and definitive version.



After the authors’ own work, the biggest contribution to a published paper is the set of reviews provided, gratis, by peers. When peer review works as it’s supposed to, these reviews add significant value to the final paper. But they are never seen by anyone except the authors and the handling editor.

This is bad for several reasons.

First, good reviewers don’t get the credit they deserve. That’s unfair on those who do a good job — who generously invest a lot of time and effort in others’ work.

Second, bad reviewers don’t get the blame they deserve. That leaves them free to act in bad faith: blocking papers by people they don’t like, or whose work is critical of their own; or just doing a completely inadequate job. Because there are no negative consequences for doing a bad job, people have no external incentive to straighten up and fly right.

Third, the effort that goes into reviewing is largely wasted. Often the reviews themselves are significant pieces of work (that’s certainly true when I’m the one giving the review) and the wider community could benefit from seeing them. Frequently reviews contain extended discussion, not only of the paper’s subject matter but of scientific philosophy such as approaches to taxonomy or narrative structure.

Fourth, editors’ decisions remain unexplained. Most editors handle manuscripts efficiently and fairly, but there are exceptions — as for example when I was one of three reviewers who wholeheartedly recommended acceptance, yet the editor rejected the paper. Even discussing that situation was difficult, because the reviews in question were not available for the world to read.

Fifth, and more general than any of the above, the reviewing process is opaque to the world. In times past, logistical reasons such as lack of space in printed journals meant that the sausage-machine approach to the review process was the only feasible one: no-one wants to see what goes into the machine or what goes on inside, we only want the final product. But we live in an increasingly open world, and consensus is that pretty much all processes benefit from openness.

There are various initiatives under way to change the legacy system of reviewing, including F1000 Research and the eLife decision-letter system. But at the moment only a small minority of papers are submitted to such venues.

What to do about the others?

And so I found myself wondering … what would happen if I just unilaterally posted the reviews I receive? I already make pages on this site for each of my published papers (example): it would be easy to extend those pages by also adding:

  • The submitted version of the manuscript
  • All the reviews I received
  • The editor’s decision letter
  • My response letter to the editor
  • The final published paper.

I know this is “not done”. My question is: why not? Is there an actual reason, other than inertia? Wouldn’t we all be better off if this was standard operating procedure?

[Note that this is orthogonal to reviewer anonymity. As it happens, I think that is also a bad thing, but it’s independent of what I’m proposing here. I could post an unsigned review as-is, without revealing who wrote it even if I knew.]

We know that most academic journals and edited volumes ask authors to sign a copyright transfer agreement before proceeding with publication. When this is done, the publisher becomes the owner of the paper; the author may retain some rights according to the grace or otherwise of the publisher.

Plenty of authors have rightly railed against this land-grab, which publishers have been quite unable to justify. On occasion we’ve found ways to avoid the transfer, including the excellent structured approach that is the SPARC Author Addendum and my tactic of transferring copyright to my wife.

Works produced by the U.S. Federal Government are not protected by copyright. For example, papers written by Bill Parker as part of his work at Petrified Forest National Park are in the public domain.

Journals know this, and have clauses in their copyright transfer agreements to deal with it. For example, Elsevier’s template agreement has a box to check that says “I am a US Government employee and there is no copyright to transfer”, and the publishing agreement itself reads as follows (emphasis added):

Assignment of publishing rights
I hereby assign to <Copyright owner> the copyright in the manuscript identified above (government authors not electing to transfer agree to assign a non-exclusive licence) and any supplemental tables, illustrations or other information submitted therewith that are intended for publication as part of or as a supplement to the manuscript (the “Article”) in all forms and media (whether now known or hereafter developed), throughout the world, in all languages, for the full term of copyright, effective when and if the article is accepted for publication.

So journals and publishers are already set up to deal with public domain works that have no copyright. And that made me wonder why this option should be restricted to U.S. Federal employees.

What would happen if I just unilaterally place my manuscript in the public domain before submitting it? (This is easy to do: you can use the Creative Commons CC0 tool.)

Once I’d done that, I would be unable to sign a copyright transfer agreement. Not merely unwilling — I wouldn’t need to argue with publishers, “Oh, I don’t want to sign that”. It would be simpler than that: it would just be “There is no copyright to transfer”.

What would publishers say?

What could they say?

“We only publish public-domain works if they were written by U.S. federal employees”?

An interesting conversation arose in the comments to Matt’s last post — interesting to me, at least, but then since I wrote much of it, I am biased.  I think it merits promotion to its own post, though.  Paul Graham, among many others, has written about how one of the most important reasons to write about a subject is that the process of doing so helps you work through exactly what you think about it.  And that is certainly what’s happening to me in this series of Open Access posts.

Dramatis personae

Liz Smith: Director of Global Internal Communications at Elsevier
Mike Taylor: me, your co-host here at SV-POW!
Andy Farke: palaeontologist, ceratopsian lover, and PLoS ONE volunteer academic editor


In a long and interesting comment, Liz wrote (among much else):

This is where there seems to be deliberate obtuseness. Sticking a single PDF up online is easy. But there are millions of papers published every year. It takes a hell of a lot of people and resources to make that happen. You can’t just sling it online and hope somebody can find it. The internet doesn’t happen by magic.

And I replied:

Actually, you can and I do. That is exactly how the Internet works. I don’t have to do anything special to make sure my papers are found — Google and other search engines pick them up, just like they do everything. So to pick an example at random, if you search for brachiosaurus re-evaluation, the very first hit will be my self-hosted PDF of my 2009 JVP paper on that subject. [Correction: I now see that it’s the third hit; the PDF of the correction is top.] Similarly, search for xenoposeidon pdf and the top hit is — get ready for a shock! — my self-hosted PDF of my 2007 Palaeontology paper on that subject.

So in fact, this is a fine demonstration of just how obsolete much of the work that publishers do has now become — all that indexing, abstracting and aggregation, work that used to be very important, but which is now done much faster, much better, for free, by computers and networks.

Really: what advantages accrue to me in having my Xenoposeidon paper available on Wiley’s site as well as mine? [It’s paywalled on their site, so useless to 99% of potential visitors, but ignore that for now. Let’s pretend it’s freely available.] What else does that get me that Google’s indexing of my self-hosted PDF doesn’t?

Liz is quite rightly taking a break over the weekend, so she’s not yet replied to this; but Andy weighed in with some important points:

To address your final statement, I see three main advantages to having a PDF on a publisher’s site, rather than just a personal web page (this follows some of our Twitter discussion the other day, but I post it here just to have it in an alternative forum):

1) Greater permanence. Personal web pages (even with the best of intentions) have a history of non-permanence; there is no guarantee your site will be around 40 or 50 years from now. Just ask my Geocities page from 1998. Of course, there also is no guarantee that Wiley’s website will be around in 2073 either, but I think it’s safe to say there’s a greater likelihood that it will be around in some incarnation than a personal website.

2) Document security. By putting archiving in the hands of the authors, there is little to prevent them from editing out embarrassing details, or adding in stuff they wanted published but the reviewers told them to take out, or whatever. I’m not saying this is something that most people would do, but it is a risk of not having an “official” copy somewhere.

3) Combating author laziness. You have an excellent track record of making your work available, but most other authors do not, for various reasons.

It is also important to note that none of the above requirements needs a commercial publisher – in fact, they would arguably be better served by taking them out of the commercial sector. My main point is that self-hosting, although a short-term solution for distribution and archival, is not a long-term one.

Finally, just as a minor pedantic note, search results depend greatly on the search engine used. Baidu – probably the most popular search engine in China – doesn’t give your self-hosted PDF anywhere in its three pages of search results (neither does it give Wiley’s version, though).

And now, here is my long reply — the one that, when I’d finished it, made me want to post this as an article:

On permanence, there are a few things to say. One is that with the rate of mergers, web-site “upgrades” and suchlike I am actually far from confident that (say) the Wiley URL for my Xenoposeidon paper will last longer than my own. In fact, let’s make it a challenge! :-) If theirs goes away, you buy me a beer; if mine does, I buy you one! But I admit that, as an IT professional who’s been running a personal website since the 1990s — no Geocities for me! — I am not a typical case.

But the more important point is that it doesn’t matter. The Web doesn’t actually run on permanent addresses, it runs on what gets indexed. If I deleted my Xenoposeidon PDF today and put it up somewhere else — say, directly on SV-POW! — within a few days it would be indexed again, and coming out at or near the top of search-engine results. Librarians and publishers used to have a very important curation role — abstracting and indexing and all that — but the main reason they keep doing these things now is habit.

And that’s because of the wonderful loosely coupled nature of the Internet. Back when people first started posting research papers on the web, there were no search engines — CERN, famously, maintained a list of all the world’s web-sites. Search engines and crawlers as we know them today were never part of the original vision of the web: they were invented and put together from spare parts. And that is the glory of the open web. The people at Yahoo and AltaVista and Google didn’t need anyone’s permission to start crawling and indexing — they didn’t need to sign up to someone’s Developer Partnership Program and sign a non-disclosure form before they were allowed to see the API documentation, and then apply for an API Key that is good for up to 100 accesses per day. All these encumbrances apply when you try to access data in publishers’ silos (trust me: my day-job employers have just spent literally months trying to suck the information out of Elsevier that is necessary to use their crappy 2001-era SOAP-based web services to search metadata. Not even content.) And this is why I can’t get remotely excited about things like ScienceDirect and Scopus. Walled gardens can give us some specific functionality, sure, but they will always be limited by what the vendor thinks of, and what the vendor can turn a profit on. Whereas if you just shove things up on the open web, anyone can do anything with them.

With that said, your point about document security is well made — we do need some system for preventing people from tampering with versions of record. Perhaps something along the lines of the DOI register maintaining an MD5 checksum of the version-of-record PDF?
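To make that suggestion concrete, here is a minimal sketch of what such an integrity check might look like — assuming only that the registry stores a digest of the official PDF (the filenames and the registry itself are hypothetical; the text names MD5, though a stronger hash such as SHA-256 would be the safer modern choice):

```python
import hashlib

def file_checksum(path, algorithm="md5", chunk_size=8192):
    """Compute a hex digest of a file, reading in chunks so that
    even very large PDFs don't need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: the DOI registry records the digest of the
# version-of-record PDF at publication time; anyone can later
# recompute it to confirm the copy they hold is untampered.
#
#   recorded = file_checksum("taylor-wedel-2021.pdf")
#   assert file_checksum("downloaded-copy.pdf") == recorded
```

Any self-hosted copy whose digest matches the registered one is bit-for-bit identical to the version of record, which would address the tampering concern without requiring the publisher to be the sole host.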

You are also right that not all authors will bother to post their PDFs — though frankly, heaven alone knows why not, when it takes five minutes to do something that will triple the accessibility of work you’ve spent a year on. This seems like an argument for repositories (whether institutional or subject-based) and mandatory deposition — e.g. as a condition of a grant.

Is that the same as the Green OA route? No, I want to see version-of-record PDFs reposited, not accepted manuscripts — for precisely the anti-tampering reason you mention above, among other reasons. Green OA is much, much better than nothing. But it’s not the real thing.

Finally: if Baidu lists neither my self-hosted Xenoposeidon PDF nor Wiley’s version anywhere in its first three pages of search results, then it is Just Plain Broken. I can’t worry about the existence of broken tools. Someone will make a better one and knock it off its perch, just like Google did to AltaVista.

And there, for the moment, matters stand.  I’m sure that Liz and Andy, and hopefully others, will have more to say.

One of the things I like about this is the way that a discussion that was originally about publisher behaviour mutated into one on the nature of the Open Web — really, where we ended up is nothing to do with Open Access per se.  The bottom line is that free systems (and here I mean free-as-in-freedom, not zero-cost) don’t just open up more opportunities than proprietary ones, they open up more kinds of opportunities, including all kinds of ideas that the original group never even thought of.

And that, really — bringing it all back to where we started — is why I care about Open Access.  Full, BOAI-compliant, Open Access.  Not just so that people can read papers at zero cost (important though that is), but so that we and a million other groups around the world can use them to build things that we haven’t even thought of yet — things as far advanced beyond the current state of the art as Google is over CERN’s old static list of web-sites.