[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly to hopefully improve the flow, but I’ve tried not to tinker with the guts.]

This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem. Then in the next post, he introduced the idea of the LWM, or Less Wrong Metric, and the basic mathematical framework for calculating LWMs. Most recently, Mike talked about choosing parameters for the LWM, and drilled down to a fundamental question: (how) do we identify good research?

Let me say up front that I am fully convinced of the seriousness of the problem of evaluating researchers fairly. It is a question of direct and timely importance to me. I serve on the Promotion & Tenure committees of two colleges at Western University of Health Sciences, and I want to make good decisions that can be backed up with evidence. But anyone who has been in academia for long knows of people who have had their careers mangled by getting caught in institutional machinery that is not well-suited for fairly evaluating scholarship. So I desperately want better metrics to catch on, to improve my own situation and those of researchers everywhere.

For all of those reasons and more, I admire the work that Mike has done in conceiving the LWM. But I’m pretty pessimistic about its future.

I think there is a widespread misapprehension that we got here because people and institutions were looking for good metrics, like the LWM, and we ended up with things like impact factors and citation counts because no-one had thought up anything better. Implying a temporal sequence of:

1. Deliberately looking for metrics to evaluate researchers.
2. Finding some.
3. Trying to improve those metrics, or replace them with better ones.

I’m pretty sure this is exactly backwards: the metrics that we use to evaluate researchers are mostly simple – easy to explain, easy to count (the hanky-panky behind impact factors notwithstanding) – and therefore they spread like wildfire, and therefore they became used in evaluation. Implying a very different sequence:

1. A metric is invented, often for a reason completely unrelated to evaluating researchers (impact factors started out as a way for librarians to rank journals, not for administration to rank faculty!).
2. Because a metric is simple, it becomes widespread.
3. Because a metric is both simple and widespread, it makes it easy to compare people in wildly different circumstances (whether or not that comparison is valid or defensible!), so it rapidly evolves from being trivia about a researcher, to being a defining character of a researcher – at least when it comes to institutional evaluation.

If that’s true, then any metric aimed for wide-scale adoption needs to be as simple as possible. I can explain the h-index or i10 index in one sentence. “Citation count” is self-explanatory. The fundamentals of the impact factor can be grasped in about 30 seconds, and even the complicated backstory can be conveyed in about 5 minutes.
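To underline how little machinery the incumbent metrics need, here is the h-index in a few lines of Python; a minimal sketch, and the citation counts are of course made up.

```python
# Minimal sketch: the h-index from a (hypothetical) list of per-paper citation counts.
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([50, 18, 7, 6, 5, 2, 0]))  # -> 5
```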

In addition to being simple, the metric needs to work the same way across institutions and disciplines. I can compare my h-index with that of an endowed chair at Cambridge, a curator at a small regional museum, and a postdoc at Podunk State, and it Just Works without any tinkering or subjective decisions on the part of the user (other than What Counts – but that affects all metrics dealing with publications, so no one metric is better off than any other on that score).

I fear that the LWM as conceived in Taylor (2016) is doomed, for the following reasons:

  • It’s too complex. It would probably be doomed if it had just a single term with a constant and an exponent (which I realize would defeat the purpose of having either a constant or an exponent), because that’s more math than either an impact factor or an h-index requires (perceptually, anyway – in the real world, most people’s eyes glaze over when the exponents come out).
  • Worse, it requires loads of subjective decisions and assigning importance on the part of the users.
  • And fatally, it would require a mountain of committee work to sort that out. I doubt if I could get the faculty in just one department to agree on a set of terms, constants, and exponents for the LWM, much less a college, much less a university, much less all of the universities, museums, government and private labs, and other places where research is done. And without the promise of universal applicability, there’s no incentive for any institution to put itself through the hell of work it would take to implement.

Really, the only way I think the LWM could get into place is by fiat, by a government body. If the EPA comes up with a more complicated but also more accurate way to measure, say, airborne particle output from car exhausts, they can theoretically say to the auto industry, “Meet this standard or stop selling cars in the US” (I know there’s a lot more legislative and legal push and pull than that, but it’s at least possible). And such a standard might be adopted globally, either because it’s a good idea so it spreads, or because the US strong-arms other countries into following suit.

Even if I trusted the US Department of Education to fill in all of the blanks for an LWM, I don’t know that they’d have the same leverage to get it adopted. I doubt that the DofE has enough sway to get it adopted even across all of the educational institutions. Who would want that fight, for such a nebulous pay-off? And even if it could be successfully inflicted on educational institutions (which sounds negative, but that’s precisely how the institutions would see it), what about the numerous and in some cases well-funded research labs and museums that don’t fall under the DofE’s purview? And that’s just in the US. The culture of higher education and scholarship varies a lot among countries. Which may be why the one-size-fits-all solutions suck – I am starting to wonder if a metric needs to be broken, to be globally applicable.

The problem here is that the user base is so diverse that the only way metrics get adopted is voluntarily. So the challenge for any LWM is to be:

  1. Better than existing metrics – this is the easy part – and,
  2. Simple enough to be both easily grasped and applied with minimal effort. In Malcolm Gladwell’s Tipping Point terms, it needs to be “sticky”. Although a better adjective for passage through the intestines of academia might be “smooth” – that is, having no rough edges, like exponents or overtly subjective decisions*, that would cause it to snag.

* Calculating an impact factor involves plenty of subjective decisions, but it has the advantages that (a) the users can pretend otherwise, because (b) ISI does the ‘work’ for them.

At least from my point of view, the LWM as Mike has conceived it is awesome and possibly unimprovable on the first point (in that practically any other metric could be seen as a degenerate case of the LWM), but dismal and possibly pessimal on the second one, in that it requires mounds of subjective decision-making to work at all. You can’t even get a default number and then iteratively improve it without investing heavily in advance.

An interesting thought experiment would be to approach the problem from the other side: invent as many new simple metrics as possible, and then see if any of them offer advantages over the existing ones. Although I have a feeling that people are already working on that, and have been for some time.

Simple, broken metrics like impact factor are the prions of scholarship. Yes, viruses are more versatile and cells more versatile still, by orders of magnitude, but compared to prions, cells take an awesome amount of effort to build and maintain. If you just want to infect someone and you don’t care how, prions are very hard to beat. And they’re so subtle in their machinations that we only became aware of them comparatively recently – much like the emerging problems with “classical” (i.e., non-alt) metrics.

I’d love to be wrong about all of this. I proposed the strongest criticism of the LWM I could think of, in hopes that someone would come along and tear it down. Please start swinging.

You’ll remember that in the last installment (before Matt got distracted and wrote about archosaur urine), I proposed a general schema for aggregating scores in several metrics, terming the result an LWM or Less Wrong Metric. Given a set of n metrics that we have scores for, we introduce a set of n exponents ei which determine how we scale each kind of score as it increases, and a set of n factors ki which determine how heavily we weight each scaled score. Then we sum the scaled results:

LWM = k1·x1^e1 + k2·x2^e2 + … + kn·xn^en
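For concreteness, here is a minimal sketch of that calculation in Python. The metric values, factors and exponents below are invented purely for illustration; nothing about them comes from the paper.

```python
# Minimal sketch of the LWM calculation: sum of k_i * x_i ** e_i.
# All numbers below are invented for illustration only.

def lwm(scores, factors, exponents):
    """Combine raw metric scores using per-metric factors and exponents."""
    return sum(k * x ** e for x, k, e in zip(scores, factors, exponents))

# Hypothetical researcher: H-index 24, 1,800 citations, 5,000 Twitter followers.
scores    = [24, 1800, 5000]
factors   = [1.0, 0.01, 0.05]   # k_i: how heavily each metric is weighted
exponents = [1.5, 1.0, 0.5]     # e_i: how each metric scales as it increases

print(lwm(scores, factors, exponents))  # one aggregate number
```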

“That’s all very well”, you may ask, “But how do we choose the parameters?”

Here’s what I proposed in the paper:

One approach would be to start with subjective assessments of the scores of a body of researchers – perhaps derived from the faculty of a university confidentially assessing each other. Given a good-sized set of such assessments, together with the known values of the metrics x1, x2, …, xn for each researcher, techniques such as simulated annealing can be used to derive the values of the parameters k1, k2, …, kn and e1, e2, …, en that yield an LWM formula best matching the subjective assessments.

Where the results of such an exercise yield a formula whose results seem subjectively wrong, this might flag a need to add new metrics to the LWM formula: for example, a researcher might be more highly regarded than her LWM score indicates because of her fine record of supervising doctoral students who go on to do well, indicating that some measure of this quality should be included in the LWM calculation.

I think as a general approach that is OK: start with a corpus of well understood researchers, or papers, whose value we’ve already judged a priori by some means; then pick the parameters that best approximate that judgement; and let those parameters control future automated judgements.
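As a sketch of how that fitting step might look in practice, here is a rough Python illustration using scipy’s dual_annealing (the paper suggests simulated annealing; any global optimiser would do). Everything here – the number of metrics, the random “metric values”, the “subjective assessments” and the parameter bounds – is an invented placeholder, not data from any real exercise.

```python
# Rough sketch: fit LWM parameters (k_i, e_i) to subjective scores by simulated annealing.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
n_metrics, n_researchers = 3, 50
x = rng.uniform(1, 100, size=(n_researchers, n_metrics))  # known metric values (placeholder)
y = rng.uniform(0, 10, size=n_researchers)                # subjective assessments (placeholder)

def lwm(params, metrics):
    k, e = params[:n_metrics], params[n_metrics:]
    return (k * metrics ** e).sum(axis=1)

def misfit(params):
    # Sum of squared differences between LWM scores and the subjective scores.
    return ((lwm(params, x) - y) ** 2).sum()

# Search over (assumed) plausible ranges for the factors and exponents.
bounds = [(0.0, 5.0)] * n_metrics + [(0.1, 2.0)] * n_metrics
result = dual_annealing(misfit, bounds, maxiter=500, seed=0)
k_best, e_best = result.x[:n_metrics], result.x[n_metrics:]
print(k_best, e_best)
```

With real data, x would hold each researcher’s actual metric values and y the confidential peer assessments; the fitted k_best and e_best would then drive future automated judgements.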

The problem, really, is how we make that initial judgement. In the scenario I originally proposed, where say the 50 members of a department each assign a confidential numeric score to all the others, you can rely to some degree on the wisdom of crowds to give a reasonable judgement. But I don’t know how politically difficult it would be to conduct such an exercise. Even if the individual scorers were anonymised, the person collating the data would know the total scores awarded to each person, and it’s not hard to imagine that data being abused. In fact, it’s hard to imagine it not being abused.

In other situations, the value of the subjective judgement may be close to zero anyway. Suppose we wanted to come up with an LWM that indicates how good a given piece of research is. We choose LWM parameters based on the scores that a panel of experts assign to a corpus of existing papers, and derive our parameters from that. But we know that experts are really bad at assessing the quality of research. So what would our carefully parameterised LWM be approximating? Only the flawed judgement of flawed experts.

Perhaps this points to an even more fundamental problem: do we even know what “good research” looks like?

It’s a serious question. We all know that “research published in high-Impact Factor journals” is not the same thing as good research. We know that “research with a lot of citations” is not the same thing as good research. For that matter, “research that results in a medical breakthrough” is not necessarily the same thing as good research. As the new paper points out:

If two researchers run equally replicable tests of similar rigour and statistical power on two sets of compounds, but one of them happens to have in her batch a compound that turns out to have useful properties, should her work be credited more highly than the similar work of her colleague?

What, then? Are we left only with completely objective measurements, such as statistical power, adherence to the COPE code of conduct, open-access status, or indeed correctness of spelling?

If we accept that (and I am not arguing that we should, at least not yet), then I suppose we don’t even need an LWM for research papers. We can just count these objective measures and call it done.

I really don’t know what my conclusions are here. Can anyone help me out?

I said last time that my new paper on Better ways to evaluate research and researchers proposes a family of Less Wrong Metrics, or LWMs for short, which I think would at least be an improvement on the present ubiquitous use of impact factors and H-indexes.

What is an LWM? Let me quote the paper:

The Altmetrics Manifesto envisages no single replacement for any of the metrics presently in use, but instead a palette of different metrics laid out together. Administrators are invited to consider all of them in concert. For example, in evaluating a researcher for tenure, one might consider H-index alongside other metrics such as number of trials registered, number of manuscripts handled as an editor, number of peer-reviews submitted, total hit-count of posts on academic blogs, number of Twitter followers and Facebook friends, invited conference presentations, and potentially many other dimensions.

In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these.

This is a key problem of the world we actually live in. We often bemoan the fact that people evaluating research will apparently do almost anything rather than actually read the research. (To paraphrase Dave Barry, these are important, busy people who can’t afford to fritter away their time in competently and diligently doing their job.) There may be good reasons for this; there may only be bad reasons. But what we know for sure is that, for good reasons or bad, administrators often do want a single number. They want it so badly that they will seize on the first number that comes their way, even if it’s as horribly flawed as an impact factor or an H-index.

What to do? There are two options. One is to change the way these overworked administrators function, to force them to read papers and consider a broad range of metrics — in other words, to change human nature. Yeah, it might work. But it’s not where the smart money is.

So perhaps the way to go is to give these people a better single number. A less wrong metric. An LWM.

Here’s what I propose in the paper.

In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these. Given a range of metrics x1, x2, …, xn, there will be a temptation to simply add them all up to yield a “super-metric”, x1 + x2 + … + xn. Such a simply derived value will certainly be misleading: no-one would want a candidate with 5,000 Twitter followers and no publications to appear a hundred times stronger than one with an H-index of 50 and no Twitter account.

A first step towards refinement, then, would weight each of the individual metrics using a set of constant parameters k1, k2, …, kn to be determined by judgement and experiment. This yields another metric, k1·x1 + k2·x2 + … + kn·xn. It allows the down-weighting of less important metrics and the up-weighting of more important ones.

However, even with well-chosen ki parameters, this better metric has problems. Is it really a hundred times as good to have 10,000 Twitter followers as to have 100? Perhaps we might decide that it’s only ten times as good – that the value of a Twitter following scales with the square root of the count. Conversely, in some contexts at least, an H-index of 40 might be more than twice as good as one of 20. In a search for a candidate for a senior role, one might decide that the value of an H-index scales with the square of the value; or perhaps it scales somewhere between linearly and quadratically – with H-index^1.5, say. So for full generality, the calculation of the “Less Wrong Metric”, or LWM for short, would be configured by two sets of parameters: factors k1, k2, …, kn, and exponents e1, e2, …, en. Then the formula would be:

LWM = k1·x1^e1 + k2·x2^e2 + … + kn·xn^en

So that’s the idea of the LWM — and you can see now why I refer to this as a family of metrics. Given n metrics that you’re interested in, you pick 2n parameters to combine them with, and get a number that to some degree measures what you care about.
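To make that concrete, here is a toy worked example with invented numbers (n = 2, so four parameters). Suppose we score only H-index and Twitter followers, and choose k1 = 1, e1 = 1.5 for H-index and k2 = 0.05, e2 = 0.5 for followers. A candidate with an H-index of 20 and 2,500 followers scores 1·20^1.5 + 0.05·2500^0.5 = 89.4 + 2.5 ≈ 92, while one with an H-index of 40 and no Twitter account scores 1·40^1.5 ≈ 253. With these (purely illustrative) parameters, publication record dominates and follower count contributes only at the margin — which is exactly the sort of behaviour the factors and exponents let you tune.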

(How do you choose your 2n parameters? That’s the subject of the next post. Or, as before, you can skip ahead and read the paper.)


Like Stephen Curry, we at SV-POW! are sick of impact factors. That’s not news. Everyone now knows what a total disaster they are: how they are significantly correlated with retraction rate but not with citation count; how they are higher for journals whose studies are less statistically powerful; how they incentivise bad behaviour including p-hacking and over-hyping. (Anyone who didn’t know all that is invited to read Brembs et al.’s 2013 paper Deep impact: unintended consequences of journal rank, and weep.)

It’s 2016. Everyone who’s been paying attention knows that impact factor is a terrible, terrible metric for the quality of a journal, a worse one for the quality of a paper, and not even in the ballpark as a metric for the quality of a researcher.

Unfortunately, “everyone who’s been paying attention” doesn’t seem to include such figures as search committees picking people for jobs, department heads overseeing promotion, tenure committees deciding on researchers’ job security, and I guess granting bodies. In the comments on this blog, we’ve been told time and time and time again — by people who we like and respect — that, however much we wish it weren’t so, scientists do need to publish in high-IF journals for their careers.

What to do?

It’s a complex problem, not well suited to discussion on Twitter. Here’s what I wrote about it recently:

The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.

It is no exaggeration to say that improving assessment is literally the most important challenge facing academia.

This is from the introduction to a new paper which came out today: Taylor (2016), Better ways to evaluate research and researchers. In eight short pages — six, really, if you ignore the appendix — I try to get to grips with the historical background that got us to where we are, I discuss some of the many dimensions we should be using to evaluate research and researchers, and I propose a family of what I call Less Wrong Metrics — LWMs — that administrators could use if they really absolutely have to put a single number on things.

(I was solicited to write this by SPARC Europe, I think in large part because of things I have written around this subject here on SV-POW! My thanks to them: this paper becomes part of their Briefing Papers series.)

Next time I’ll talk about the LWM and how to calculate it. Those of you who are impatient might want to read the actual paper first!


I’d hoped that we’d see a flood of BRONTOSMASH-themed artwork, but that’s not quite happened. We’ve seen a trickle, though, and that’s still exciting. Here are the ones I know about. If anyone knows of more, please let me know and I will update this post.

First, in a comment on the post with my own awful attempts, Darius posted this sketch of a BRONTOSMASH-themed intimidation display:

[Image: Darius’s sketch of an apatosaurine intimidation display]

And in close-up:

[Image: close-up of Darius’s sketch]

Very elegant, and it’s nice to see an extension of our original hypothesis into other behaviours.

The next thing I saw was Mark Witton’s beautiful piece, described on his own site (in a post which coined the term BRONTOSMASH):

[Image: BRONTOSMASH, by Mark Witton]

And in close-up:

[Image: close-up of Mark Witton’s BRONTOSMASH]

I love the sense of bulk here — something of the elephant-seal extant analogue comes through — and the subdued colour scheme. Also, the Knight-style inclusion in the background of the individual in the swamp. (No, sauropods were not swamp-bound; but no doubt, like elephants, they spent at least some time in water.)

And finally (for now, at least) we have Matthew Inabinett’s piece, simply titled BRONTOSMASH:

[Image: BRONTOSMASH, by Matthew Inabinett]

I love the use of traditional materials here — yes, it still happens! — and I like the addition of the dorsal midline spike row to give us a full on TOBLERONE OF DOOM. (Also: the heads just look right. I wish I could do that. Maybe one day.)

Update (Monday 26 October)

Here is Oliver Demuth’s sketch, as pointed out by him in a comment.

[Image: Oliver Demuth’s BRONTOSMASH sketch]

Thanks, Oliver! Nice to see the ventral-on-dorsal combat style getting some love.

So that’s where we are, folks. Did I miss any? Is anyone working on new pieces on this theme? Post ’em in the comments!

 

In my recent preprint on the incompleteness and distortion of sauropod neck specimens, I discuss three well-known sauropod specimens in detail, and show that they are not as well known as we think they are. One of them is the Giraffatitan brancai lectotype MB.R.2181 (more widely known by its older designation HMN SII), the specimen that provides the bulk of the mighty mounted skeleton in Berlin.

[Image: Giraffatitan brancai cervical 8, with red arrows marking the epipophyses]

That photo is from this post, which is why it’s disfigured by red arrows pointing at its epipophyses. But the vertebra in question — the eighth cervical of MB.R.2181 — is a very old friend: in fact, it was the subject of the first ever SV-POW! post, back in 2007.

In the preprint, to help make the point that this specimen was found extremely disarticulated, I reproduce Heinrich (1999:figure 16), which is Wolf-Dieter Heinrich’s redrawing of Janensch’s original sketch map of Quarry S, made in 1909 or 1910. Here it is again:

Taylor 2015: Figure 5. Quarry map of Tendaguru Site S, Tanzania, showing incomplete and jumbled skeletons of Giraffatitan brancai specimens MB.R.2180 (the lectotype, formerly HMN SI) and MB.R.2181 (the paralectotype, formerly HMN SII). Anatomical identifications of SII are underlined. Elements of SI could not be identified with certainty. From Heinrich (1999: figure 16), redrawn from an original field sketch by Werner Janensch.

For the preprint, as for this blog-post (and indeed the previous one), I just went right ahead and included it. But the formal version of the paper (assuming it passes peer-review) will be very explicitly under a CC By licence, so the right thing to do is get formal permission to include it under those terms. So I’ve been trying to get that permission.

What a stupid, stupid waste of time.

Heinrich’s paper appeared in the somewhat cumbersomely titled Mitteilungen aus dem Museum für Naturkunde in Berlin, Geowissenschaftliche Reihe, published as a subscription journal by Wiley. Happily, that journal is now open access, published by Pensoft as The Fossil Record. So I wrote to the Fossil Record editors to request permission. They wrote back, saying:

We are not the right persons for your question. The Wiley Company holds the copyright and should therefore be asked. Unfortunately, I do not know who is the correct person.

I didn’t know who to ask, either, so I tweeted a question, and copyright guru Charles Oppenheim suggested that I email permissions@wiley.com. I did, only to get the following automated reply:

Dear Customer,

Thank you for your enquiry.

We are currently experiencing a large volume of email traffic and will deal with your request within the next 15 working days.

We are pleased to advise that permission for the majority of our journal content, and for an increasing number of book publications, may be cleared more quickly by using the RightsLink service via Wiley’s websites http://onlinelibrary.wiley.com and www.wiley.com.

Within the next fifteen working days? That is, in the next three weeks? How can it possibly take that long? Are they engraving their response on a corundum block?

So, OK, let’s follow the automated suggestion and try RightsLink. I went to the Wiley Online Library, and searched for journals whose names contain “naturkunde”. Only one comes up, and it’s not the right one. So Wiley doesn’t admit the existence of the journal.

Despite this, Google finds the article easily enough with a simple title search. From the article’s page, I can just click on the “Request Permissions” link on the right, and …

[Screenshot: RightsLink’s “permission cannot be granted” popup]

Well, there’s lots to enjoy here, isn’t there? First, and most important, it doesn’t actually work: “Permission to reproduce this content cannot be granted via the RightsLink service.” Then there’s that cute little registered-trademark symbol “®” on the name RightsLink, because it’s important to remind me not to accidentally set up my own rights-management service with the same name. In the same vein, there’s the “Copyright © 2015 Copyright Clearance Center, Inc. All Rights Reserved” notice at the bottom — copyright not on the content that I want to reuse, but on the RightsLink popup itself. (Which I guess means I am in violation for including the screenshot above.) Oh, and there’s the misrendering of “Museum für Naturkunde” as “Museum für Naturkunde”.

All of this gets me precisely nowhere. As far as I can tell, my only recourse now is to wait three weeks for Wiley to get in touch with me, and hope that they turn out to be in favour of science.

[Image: “sadness”, by aoao2]

It’s Sunday afternoon. I could be watching Ireland play France in the Rugby World Cup. I could be out at Staverton, seeing (and hearing) the world’s last flying Avro Vulcan overfly Gloucester Airport for the last time. I could be watching Return of the Jedi with the boys, in preparation for the forthcoming Episode VII. Instead, here I am, wrestling with copyright.

How absolutely pointless. What a terrible waste of my life.

Is this what we want researchers to be spending their time on?

Promoting the Progress of Science and useful Arts, indeed.

Update (13 October 2015): a happy outcome (this time)

I was delighted, on logging in this morning, to find I had email from RIGHTS-and-LICENCES@wiley-vch.de with the subject “Permission to reproduce Heinrich (1999:fig. 16) under CC By licence” — a full thirteen working days earlier than expected. They were apologetic and helpful. Here is the key part of what they said:

We are of course happy to handle your request directly from our office – please find the requested permission here:
We hereby grant permission for the requested use expected that due credit is given to the original source.
If material appears within our work with credit to another source, authorisation from that source must be obtained.
Credit must include the following components:
– Journals: Author(s) Name(s): Title of the Article. Name of the Journal. Publication  year. Volume. Page(s). Copyright Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.

So this is excellent. I would of course have included all those elements in the attribution anyway, with the exception that it might not have occurred to me to state who the copyright holder is. But there is no reason to object to that.

So, two cheers for Wiley on this occasion. I had to waste some time, but at least none of it was due to deliberate obstructiveness, and most importantly they are happy for their figure to be reproduced under CC By.

References

  • Heinrich, Wolf-Dieter. 1999. The taphonomy of dinosaurs from the Upper Jurassic of Tendaguru, Tanzania (East Africa), based on field sketches of the German Tendaguru expedition (1909-1913). Mitteilungen aus dem Museum für Naturkunde in Berlin, Geowissenschaftliche Reihe 2:25-61.

Since I posted my preprint “Almost all known sauropod necks are incomplete and distorted” and asked in the comments for people to let me know if I missed any good necks, the candidates have been absolutely rolling in:

I will be investigating the completeness of all of these and mentioning them as appropriate when I submit the revision of this paper. (In retrospect, I should have waited a week after posting the preprint before submitting for formal review; but I was so scared of letting it brew for years, as we’re still doing with the Barosaurus preprint to our shame, that I submitted it immediately.)

So we probably have a larger number of complete or near-complete sauropod necks than the current draft of this paper suggests. But still very few in the scheme of things, and essentially none that aren’t distorted.

So I want to consider why we have such a poor fossil record of sauropod necks. All of the problems with sauropod neck preservation arise from the nature of the animals.

First, sauropods are big. This is a recipe for incompleteness of preservation. (It’s no accident that the most completely preserved specimens are of small individuals such as CM 11338, the cow-sized juvenile Camarasaurus lentus described by Gilmore, 1925). For an organism to be fossilised, the carcass has to be swiftly buried in mud, ash or some other substrate. This can happen relatively easily to small animals, such as the many finely preserved stinkin’ theropods from the Yixian Formation in China, but it’s virtually impossible with a large animal. Except in truly exceptional circumstances, sediments simply don’t get deposited quickly enough to cover a 25 meter, 20 tonne animal before it is broken apart by scavenging, decay and water transport.

Taylor 2015: Figure 5. Quarry map of Tendaguru Site S, Tanzania, showing incomplete and jumbled skeletons of Giraffatitan brancai specimens MB.R.2180 (the lectotype, formerly HMN SI) and MB.R.2181 (the paralectotype, formerly HMN SII). Anatomical identifications of SII are underlined. Elements of SI could not be identified with certainty. From Heinrich (1999: figure 16), redrawn from an original field sketch by Werner Janensch.

Secondly, even when complete sauropods are preserved, or at least complete necks, distortion of the preserved cervical vertebrae is almost inevitable because of their uniquely fragile construction. As in modern birds, the cervical vertebrae were lightened by extensive pneumatisation, so that they were more air than bone, with the air-space proportion typically in the region of 60–70% and sometimes reaching as high as 89%. While this construction enabled the vertebrae to withstand great stresses for a given mass of bone, it nevertheless left them prone to crushing, shearing and torsion when removed from their protective layer of soft tissue. For large cervicals in particular, the chance of the shape surviving through taphonomy, fossilisation and subsequent deformation would be tiny.

So I think we’re basically doomed never to have a really good sauropod neck skeleton.
