April 20, 2013
It’s well worth reading this story about Thomas Herndon, a graduate student who as part of his training set out to replicate a well-known study in his field.
The work he chose, Growth in a Time of Debt by Reinhart and Rogoff, claims to show that “median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower.” It has been influential in guiding the economic policy of several countries, reaffirming an austerity-based approach.
So here is Lesson zero, for policy makers: correlation is not causation.
To skip ahead to the punchline, it turned out that Reinhart and Rogoff made a trivial but important mechanical mistake in their working: they meant to average values from 19 rows of their spreadsheet, but got the formula wrong and missed out the last five. Those five included three countries which had experienced high growth while deep in debt, and which if included would have undermined the conclusions.
Therefore, Lesson one, for researchers: check your calculations. (Note to myself and Matt: when we revise the recently submitted Taylor and Wedel paper, we should be careful to check the SUM() and AVG() ranges in our own spreadsheet!)
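The spreadsheet error is easy to reproduce. Here is a minimal Python sketch with made-up growth figures (not the actual Reinhart-Rogoff data), showing how an averaging formula that stops five rows short can materially change the answer when the excluded rows are high-growth countries:

```python
# Hypothetical growth rates for 19 countries -- illustrative only,
# NOT the Reinhart-Rogoff data. The last five are high-growth cases.
growth_rates = [3.1, 2.4, -0.1, 1.0, 2.2, 0.8, 1.5, 2.9, 0.3, 1.1,
                2.0, 1.8, 0.5, 2.6, 3.4, 2.5, 2.7, 3.0, 2.8]

# Like writing AVERAGE(B1:B14) when you meant AVERAGE(B1:B19):
wrong = sum(growth_rates[:14]) / 14            # last five rows silently dropped
right = sum(growth_rates) / len(growth_rates)  # the intended full-range average

print(f"truncated range: {wrong:.2f}, full range: {right:.2f}")
```

The truncated average comes out noticeably lower than the true one, which is exactly the shape of the mistake Herndon found: nothing in the output warns you that the range was wrong.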
Herndon was able to discover this mistake only because he repeatedly hassled the authors of the original study for the underlying data. He was ignored several times, but eventually one of the authors did send the spreadsheet. Which is just as well. But of course he should never have had to go chasing the authors for the spreadsheet because it should have been published alongside the paper.
Lesson two, for researchers: submit your data alongside the paper that uses it. (Note to myself and Matt: when we submit the revisions of that paper, submit the spreadsheets as supplementary files.)
Meanwhile, governments around the world were allowing policy to be influenced by the original paper without checking it — policies that affect the disposition of billions of pounds. Yet the paper only got its post-publication review because of a post-grad student’s exercise. That’s insane. It should be standard practice to have someone spend a day or two analysing a paper in detail before letting it have such a profound effect.
And so Lesson three, for policy makers: replicate studies before trusting them.
Ironically, this may be a case where the peer-review system inadvertently did actual harm. It seems that policy makers may have shared the widespread superstition that peer-reviewed publications are “authoritative”, or “quality stamped”, or “trustworthy”. That would certainly explain their allowing it to affect multi-billion-pound policies without further validation. [UPDATE: the paper wasn't peer-reviewed after all! See the comment below.]
Of course, anyone who’s actually been through peer-review a few times knows how hit-and-miss the process is. Only someone who’s never experienced it directly could retain blind faith in it. (In this respect, it’s a lot like cladistics.)
If a paper has successfully made it through peer-review, we should afford it a bit more respect than one that hasn’t. But that should never translate to blind trust.
In fact, let’s promote that to Lesson four: don’t blindly trust studies just because they’re peer-reviewed.
December 10, 2012
There’s been a lot of concern in some corners of the world about the Finch Report’s preference for Gold open access, and the RCUK policy’s similar leaning. Much of the complaining has focussed on the cost of Gold OA publishing: Article Processing Charges (APCs) are very offputting to researchers with limited budgets. I thought it would be useful to provide a page that I (and you) can link to when facing such concerns.
This is long and (frankly) a bit boring. But I think it’s important and needs saying.
1. How much does the Finch Report suggest APCs cost?
Worries about high publishing costs are exacerbated by the widely reported estimate of £2000 for a typical APC, attributed to the Finch Report. In fact, that is not quite what the report (page 61) says:
Subsequent reports also suggest that the costs for open access journals average between £1.5k and £2k, which is broadly in line with the average level of APCs paid by the Wellcome Trust in 2010, at just under £1.5k.
Still, the midpoint of Finch’s “£1.5k-£2k” range is £1750, a hefty amount. Where does it come from? A footnote elucidates:
Houghton J et al, op cit; Heading for the Open Road: costs and benefits of transitions in scholarly communications, RIN, PRC, Wellcome Trust, JISC, RLUK, 2011. See also Solomon, D. and Björk, B-C., A study of Open Access Journals using article processing charges, Journal of the American Society for Information Science and Technology, which suggests an average level of APCs for open access journals (including those published at very low cost in developing countries) of just over $900. It is difficult to judge – opinions differ – whether costs for open access journals are on average likely to rise as higher status journals join the open access ranks; or to fall as new entrants come into the market.
[An aside: these details would probably be better known, and the details of the Finch report would be discussed in a more informed way, if the report were available on the Web in a form where individual sections could be linked, rather than only as a PDF.]
The first two cited sources look good and authoritative, being from JISC and a combination of well-respected research organisations. Nevertheless, the high figure that they cite is misleading, and unnecessarily alarming, for several reasons.
2. Why the Finch estimate is misleading
2.1. It ignores free-to-the-author journals.
The Solomon and Björk analysis that the Finch Report rather brushes over is the only one of the three to have attempted any rigorous numerical analysis, and it found as follows (citing an earlier study):
Almost 23,000 authors who had published an article in an OA journal were asked about how much they had paid. Half of the authors had not paid any fee at all, and only 10% had paid fees exceeding 1,000 Euros [= £812, less than half of the midpoint of Finch's range].
And the proportion of journals that charge no APC (as opposed to authors who paid no fee) is even higher — nearly three quarters:
As of August 2011 there were 1,825 journals listed in the Directory of Open Access Journals (DOAJ) that, at least by self-report, charge APCs. These represent just over 26% of all DOAJ journals.
So there are a lot of zero-cost options. And these are by no means all low-quality journals: they include, for example, Acta Palaeontologica Polonica and Palaeontologia Electronica in our own field of palaeontology, the Journal of Machine Learning Research in computer science and Theory and Applications of Categories in maths.
2.2. It ignores the low average price found by the Solomon and Björk analysis.
The Solomon and Björk paper is full of useful information and well worth detailed consideration. They make it clear in their methodology section that their sample was limited only to those journals that charge a non-zero APC, and their analysis concluded:
[We studied] 1,370 journals that published 100,697 articles in 2010. The average APC was 906 US Dollars (USD) calculated over journals and 904 US Dollars USD calculated over articles.
(The closeness of the average over journals and over articles is important: it shows that the average-by-journals is not being artificially depressed by a large number of very low-volume journals that have low APCs.)
2.3. It focusses on authors who are spending Other People’s Money.
Recall that Finch’s “£1.5k-£2k” estimate is justified in part by the observation that the APC paid by the Wellcome Trust in 2010 was just under £1.5k. But it’s well established that people spending Other People’s Money get less good value than when they spend their own: that’s why travellers who fly business class when their employer is paying go coach when they’re paying for themselves. (This is an example of the principal-agent problem.)
It’s great that the Wellcome Trust, and some other funders, pay Gold OA fees. For researchers in this situation, APCs should not be a problem; but for the rest of us (and, yes, that includes me — I’ve never had a grant in my life) there are plenty of excellent lower-cost options.
And as noted above, lower cost, or even no cost, does not need to mean lower quality.
2.4. It ignores the world’s leading open-access journal.
PLOS ONE publishes more articles than any other journal in the world, has very high production values, and for those who care about such things has a higher impact-factor than almost any specialist palaeontology journal. Its APC is $1350, which is currently about £839 — less than half of the midpoint of Finch’s “£1.5k-£2k” range.
Even PLOS’s flagship journal, PLOS Biology, which is ranked top in the JCR’s biology section, charges $2900, about £1802, which is well within the Finch range.
Meanwhile, over in the humanities (where much of the negative reaction to Finch and RCUK is to be found), the leading open-access megajournal is much cheaper even than PLOS ONE: SAGE Open currently offers an introductory APC of $195 (discounted from the regular price of $695).
2.5. It ignores waivers.
The most important, and most consistently overlooked, fact among those who complain that they don’t have any funds for Gold-OA publishing is that many Gold-OA journals offer waivers.
For example, PLOS co-founder Michael Eisen affirms (pers. comm.) that it’s explicitly part of the PLOS philosophy that no-one should be prevented from publishing in a PLOS journal by financial issues. And that philosophy is implemented in the PLOS policy of offering waivers to anyone who asks for one. (For example, my old University of Portsmouth colleagues, Mark Witton and Darren Naish certainly had no funds from UoP to support publication of their azhdarchid palaeobiology paper in PLOS ONE; they asked for a waiver and got it, no questions asked.)
Other major open-access publishers have similar policies.
2.6. It doesn’t recognise how the publishing landscape is changing.
It’s not really a criticism of the Finch Report — at least, not a fair one — that its coverage of eLife and PeerJ is limited to a single passing mention on page 58. Neither of these initiatives had come into existence when the report was drafted. Nevertheless, they have quickly become hugely important in shaping the world of publishing — it’s not a stretch to say that they have already joined BMC and PLOS in defining the shape of the open access world.
For the first few years of operation, eLife is waiving all APCs. It remains to be seen what will happen after that, but I think there are signs that their goal may be to retain the no-APC model indefinitely. PeerJ does charge, but is ridiculously cheap: a one-off payment of $99 pays for a publication every year for life; or $299 for any number of publications at any time. Those numbers are going to skew the average APC way, way down even from their current low levels.
2.7. I suspect it concentrates on hybrid-OA journals.
Why do people use hybrid journals when they are more expensive than fully OA journals and offer so much less (e.g. limits on length, colour, and number of figures)? I suspect hybrid OA is the lazy option for researchers who have to conform to an OA mandate but don’t want to invest any time or effort in thinking about open-access options. It’s easy to imagine such researchers just shoving their work into the traditional paywalled journal, and letting the Wellcome grant pick up the tab. After all, it’s Other People’s Money.
If grant-money for funding APCs becomes more scarce as it’s required to stretch further, then researchers who’ve been taking this sort of box-checking approach to fulfilling OA mandates are going to be forced to think more about what they’re doing. And that’s a good thing.
3. What is the true average cost?
If we put all this together, and assume that researchers working from RCUK funds will make some kind of effort to find good-value open-access journals for their work instead of blindly throwing it at traditional subscription journals and expecting RCUK to pick up the fee, here’s where we land up.
- About half of authors currently pay no fee at all.
- Among those that do pay a fee, the average is $906.
- So the overall average fee is about $453.
- That’s about £283, which is less than one sixth of what Finch suggests.
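The arithmetic in that list can be sketched in a few lines of Python. (The exchange rate here is my assumption of the approximate contemporary USD/GBP rate; the other figures come from the post.)

```python
# Sketch of the averaging above. Figures from the post; the exchange
# rate of 1.60 USD/GBP is an assumed contemporary value.
share_paying_nothing = 0.5     # about half of OA authors pay no fee at all
avg_fee_among_payers = 906     # USD, Solomon & Bjork average over fee-charging journals
usd_per_gbp = 1.60             # assumed exchange rate

overall_avg_usd = (1 - share_paying_nothing) * avg_fee_among_payers  # = 453.0
overall_avg_gbp = overall_avg_usd / usd_per_gbp                      # ~ 283

finch_midpoint_gbp = 1750
print(overall_avg_usd, round(overall_avg_gbp),
      round(finch_midpoint_gbp / overall_avg_gbp, 1))  # ratio > 6
```

The last number shows why “less than one sixth of what Finch suggests” holds: the Finch midpoint is more than six times the realistic average.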
4. What are we comparing with?
It’s one thing to find a more realistic cost for an average open-access article. But we also need to realise that we’re not comparing with zero. Authors have always paid publication fees in certain circumstances — subscription journals have levied page charges, extra costs for going past a certain length, for colour figures, etc. For example, Elsevier’s American Journal of Pathology charges authors “$550 per color figure, $50 per black & white or grayscale figure, and $50 per composed table, per printed page”. So a single colour figure in that journal costs more than the whole of a typical OA article.
But that’s not the real cost to compare with.
The real cost is what the world at large pays for each paywalled article. As we discussed here in some detail, the aggregate subscription paid to access an average paywalled article is about $5333. That’s as much as it costs to publish nearly twelve average open-access articles — and for that, you get much less: people outside of universities can’t get it even after the $5333 has been paid.
5. Directing our anger properly
Now think about this: the Big Four academic publishers have profit-margins between 32.4% and 42%. Let’s pick a typical profit margin of 37% — a little below the middle of that range. Assuming this is pretty representative across all subscription publishers — and it will be, since the Big Four control so much of the volume of subscription publishing — that means that 37% of the $5333 of an average paywalled article’s subscription money is pure profit. So $1973 is leaving academia every time a paper is “published” behind a paywall.
So every time a university sends a paper behind a paywall, the $1973 that it burns could have funded four average-priced Gold-OA APCs. Heck, even if you want to discount all the small publishers and put everything in PLOS — never taking a waiver — it would pay for one and a half PLOS ONE articles.
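For the sceptical, here is the arithmetic of the last two paragraphs as a Python sketch (the 37% margin is the assumed representative figure from above):

```python
# Profit arithmetic from the post. The 37% margin is an assumed
# representative value within the reported 32.4%-42% Big Four range.
subscription_cost_per_article = 5333   # USD, aggregate subscription paid per paywalled article
profit_margin = 0.37

profit_per_article = subscription_cost_per_article * profit_margin
avg_gold_apc = 453                     # USD, overall average APC from section 3
plos_one_apc = 1350                    # USD

print(round(profit_per_article))                 # ~ 1973 leaves academia per paper
print(profit_per_article / avg_gold_apc)         # ~ 4.4 average-priced Gold-OA APCs
print(profit_per_article / plos_one_apc)         # ~ 1.5 PLOS ONE articles
```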
So let me leave you with this. In recent weeks, I’ve seen a fair bit of anger directed at the Finch Report and the RCUK policy. Some researchers have been up in arms at the prospect of having to “pay to say”. I want to suggest that this anger is misdirected. Rather than being angry with a policy that says you need to find $453 when you publish, direct your anger at publishers who remove $1973 from academia every time you give them a paper.
Folks, we have to have the vision to look beyond what is happening right now in our departments. Gold OA does, for sure, mean a small amount of short-term pain. It also means a massive long-term win for us all.
A couple of weeks ago we tried to work out what it costs the global academic community when you publish a paper behind an Elsevier paywall instead of making it open access. The tentative conclusion was that it’s somewhere between £3112 and £6224 (or about $4846-9692), which is about 3.6-7.2 times the cost of publishing in PLoS ONE.
That calculation was fraught with uncertainty, because it’s so difficult to get solid numbers out of Elsevier. So let’s try a simpler one.
In 2009, The STM report: an overview of scientific and scholarly journal publishing reported (page 5) that:
The annual revenues generated from English-language STM journal publishing are estimated at about $8 billion in 2008, up by 6-7% compared to 2007, within a broader STM publishing market worth some $16 billion.
There were about 25,400 active scholarly peer-reviewed journals in early 2009, collectively publishing about 1.5 million articles a year.
8 billion dollars divided by 1.5 million articles yields a per-article revenue to the STM industry of $5333. And since publisher revenue is the same as academia’s expenditure on publishing, that is the per-article cost to Academia.
(What about the articles currently published as gold open access? Don’t they cut down the number that are being bought through subscriptions, and so raise the average price of a paywalled article? Yes, but not by much: according to page 7 of the report, “about 2% of articles are published in full open access journals” — a small enough proportion that we can ignore it for the purposes of this calculation.)
What can we make of this $5333 figure? For a start, it’s towards the bottom of the $4846-9692 Elsevier range — only 10% of the way up that range. So the balance of probability strongly suggests that Elsevier’s prices are above the industry-wide average, but not hugely above — somewhere between 10% below and 80% above the average.
More importantly, each paywalled article costs the world as much as four PLoS ONE articles. In other words, if we all stopped submitting to paywalled journals today and sent all our work to PLoS ONE instead, the total scholarly publishing bill would fall by 75%, from $8 billion to $2 billion.
Why am I comparing with PLoS ONE’s $1350? There are other comparisons I could use — for example, the average cost of $906 calculated by of Solomon and Björk across 100,697 open-access articles in 1,370 journals. But that figure is probably propped up by journals that are deliberately being run at a loss in order to gain visibility or prestige. PLoS ONE is a more conservative comparison point because we know its $1350 is enough for it to run at a healthy operating profit. So we know that a switch to PLoS ONE and similar journals would be financially sustainable.
But there’s certainly no reason to think that PLoS ONE’s price of $1350 is as low as you can go and still have good-quality peer-reviewed gold open access. For example, PLoS ONE’s long-time Editor-in-Chief, Pete Binfield, thinks that it can be done, at a profit, for $99 — a staggering 92% price-cut from the $1350 figure we’ve been using. If he’s right — and he’s betting his mortgage that he is — then we could have 54 peer-reviewed articles in PeerJ for every one that goes behind a paywall.
It’s too early to know whether PeerJ will work (and I’ll talk about that more another time). But the very fact that someone as experienced and wily as Binfield thinks it will — and was able to attract venture capital from a disinterested and insightful party — strongly indicates that this price-point is at least in the right ballpark.
Which is more than can be said for the Finch Report’s ludicrous over-estimate of £1500-£2000.
July 9, 2012
What does it cost to publish a paper in a non-open access Elsevier journal? The immediate cost to the author is often zero (though page charges, and fees for colour illustrations mean this is not always true). But readers have to pay to see the paper, either directly in the case of private individuals or through library budgets in the case of university staff and students. What is the total cost to the world?
It’s a calculation that I’ve taken a couple of stabs at in public forums, but in both cases space restraints meant that I couldn’t lay out the reasoning in the detail I’d like — and as a result I couldn’t get the kind of detailed feedback that would allow me to refine the numbers. So I am trying again here.
The first version of the calculation was in my article Open, moral and pragmatic at Times Higher Education:
According to Elsevier’s annual report for 2010, it publishes about “200,000 new science & technology research articles each year”. The same report reveals revenues for 2010 of £2.026 billion. This works out as £10,130 per article, each made available only to the tiny proportion of the world’s population that has access to a subscribing library.
As Kent Anderson pointed out in an otherwise misleading comment, that calculation was flawed in that I was using the total of Elsevier revenue rather than just the portion that comes from journal subscriptions. Trying to fix this, and using more up-to-date figures, I provided a better estimate in Academic Publishing Is Broken at The Scientist:
To publish in an Elsevier journal … appears to cost some $10,500. In 2011, 78 percent of Elsevier’s total revenue, or £1,605 million, was contributed by journal subscriptions. In the same year, Elsevier published 240,000 articles, making the average cost per article some £6,689, or about $10,500 US.
But this, it turns out, is also an over-estimate, because it’s 78% of Elsevier’s Scientific & Technical revenue that comes from journal subscriptions; the other half of Elsevier, the Health Sciences division, has its own revenues.
The data we have to work with
Here’s what I have right now — using data from 2010, the last complete year for which numbers are available.
Bear in mind that Elsevier is a publisher, and Reed Elsevier is a larger company that owns Elsevier and a bunch of other businesses such as LexisNexis. According to the notes from a Reed Elsevier investment seminar that took place on December 6, 2011 in London:
- Page 2: 34% of Reed Elsevier’s total 2010 revenue of £6,055M (i.e. £2058.7M) was from “Science and Medical”, which I take to mean Elsevier. This is in keeping with the total revenue number from Elsevier’s annual report.
- Page 8: Elsevier’s revenues are split 50-50 between the Scientific & Technical division and the Health Sciences division. 39% of total Elsevier revenue (i.e. £803M) is from research journals in the S&T sector. No percentage is given for research journal revenue in Health Sciences.
- Page 18: confirmation that 78% of Scientific & Technical revenue (i.e. 39% of total Elsevier revenue) is from research journals.
- Page 21: total number of articles published in 2010 seems to be about 258,000 (read off from the graph).
- Page 22 confirms “>230,000 articles per year”.
- Page 23, top half, says “>80% of revenue derived from subscriptions, strongly recurring revenues”. Bottom half confirms earlier revenue of 78% for research journals. I suppose that the “subscriptions” amounting to >80% must include database subscriptions.
The other important figure is the proportion of Elsevier journal revenue that comes from Gold OA fees rather than subscriptions. The answer is, almost none. Figures for 2010 are no longer on Elsevier’s Sponsored Articles page, but happily we quoted it in an older SV-POW! post:
691 Elsevier articles across some six hundred journals were sponsored in 2010. Sponsorship revenues from these articles amounted to less than 0.1% of Elsevier’s total revenues.
So for the purposes of these rough-and-ready calculations, we can ignore Elsevier’s Gold-OA revenue completely and assume that all research-journal revenue is from subscriptions.
The data we don’t have
The crucial piece of information we don’t have is this: how much of Elsevier Health Sciences revenue is from journal subscriptions? This information is not included in the investor report, and my attempts to determine it have so far been wholly unsuccessful. Back in March, I contacted Liz Smith (VP/Director of Global Internal Communications), Alicia Wise (Director of Universal Access), Tom Reller (VP of Global Corporate Relations), Ron Mobed (CEO of Scientific & Technical) and Michael Hansen (CEO of Health Sciences). Of these, only Tom Reller got back to me — he was helpful, and pointed me to the investor report that I cite heavily above — but wasn’t able to give me a figure.
If anyone knows the true percentage — or can even narrow the range a bit — I would love to know about it. Please leave a comment.
In the mean time, I will proceed with calculations on two different bases:
- That Health Sciences revenue is proportioned the same as Scientific & Technical, i.e. 78% comes from journal subscriptions;
- That Health Sciences has no revenue from journal subscriptions. This seems very unrealistic to me, but will at least give us a hard lower bound.
The calculation
It’s pretty simple.
If HS journal-subscription revenue is zero, then Elsevier’s total from journal subscriptions in 2010 was £803M. On the other hand, if HS revenue proportions are about the same as in S&T, then total journal-subscription revenue was twice this, £1606M.
Across the 258,000 or so articles published in 2010, that yields either £803M / 258,000 = £3112 per article, or £1606M / 258,000 = £6224 per article. At current exchange rates, that’s $4816 or $9632. My guess is that the true figure is somewhere between these extremes. If I had to give a single figure, I guess I’d split the difference and go with £4668, which is about $7224.
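The two bounds, their midpoint, and the dollar conversion can be sketched in Python (the exchange rate is my assumption of the approximate 2012 rate):

```python
# The two-bases calculation above, using the investor-report figures.
st_journal_revenue_gbp = 803_000_000   # 39% of Elsevier revenue: S&T research journals
articles_2010 = 258_000                # read off the investor-report graph
gbp_to_usd = 1.548                     # assumed approximate 2012 exchange rate

per_article_low = round(st_journal_revenue_gbp / articles_2010)  # HS journals contribute nothing
per_article_high = 2 * per_article_low                           # HS mirrors S&T proportions
midpoint = (per_article_low + per_article_high) // 2

print(per_article_low, per_article_high, midpoint,
      round(midpoint * gbp_to_usd))    # ~ $7226 at the assumed rate
```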
Remember: this is what it costs the academic world to get access to your article when you give it to an Elsevier journal. Those parts of the academic world that have access, that is — don’t forget that many universities and almost everyone outside a university won’t be able to access it at all.
This is less than my previous estimates. It’s still an awful lot.
Why this matters
Over on Tim Gowers’ blog, he’s recently announced the launch of a new open-access maths journal, Forum of Mathematics, to be published by Cambridge University Press. The new journal will have an article processing fee of £500 after the first three years, during which all fees will be waived. I’ve been shocked at the vehemence with which a lot of commenters have objected to the idea of any article processing fee.
Here’s the thing. For each maths article that’s sent to an Elsevier journal, costing the worldwide maths community between £3112 and £6224, that same worldwide maths community could instead pay for six to twelve open-access articles in the new journal. And those articles would then be available to anyone who wanted them, not only people affiliated with subscribing institutions.
To me, the purely economic argument for open access is unanswerable. Even if you leave aside the moral argument, the text-mining argument, and so on, you’re left with a very stark financial equation. It’s madness to give research to subscription publishers.
In the middle of February, Times Higher Education ran a piece by Elsevier boycott originator Tim Gowers, entitled Occupy publishing. A week ago, they published a letter in response, written by Elsevier Senior VP David Clark, under the title If it ain’t broke, don’t bin it, in which he argued that “there is little merit in throwing away a system that works in favour of one that has not even been developed yet”.
Seeing the current journal system, with its arbitrary barriers, economic inefficiencies and distorted perspective on impact, described as “a system that works” was more than I could bear. So I sent a letter in response, and it’s published in today’s issue as Open, moral and pragmatic.
Space limitations of THE letters meant that I was only able to address one aspect — the economics. Based on numbers in their own annual report, I show that the cost of each article that Elsevier makes available to subscribers is twelve times the cost of each article that PLoS makes available to the world. And since Elsevier’s 200,000 articles per year are about a seventh of the total global output, the money paid to Elsevier alone would easily pay for every single paper to be published as open access. Easily.
No doubt there are errors in some of the numbers, which are necessarily estimates; and the calculation is overly simplistic. But even allowing for that, there is plenty enough slop in the figures that the conclusion stands. If we stopped paying Elsevier subscriptions alone — we can keep Wiley, Springer and the rest — the money we save would pay for all our work to be available to the whole world, with hundreds of millions of pounds left over to fund more research.
Worried about the lack of jobs in palaeontology? Concerned that universities are reducing the number of tenure-track positions? Disturbed by the elimination of curators and preparators from museums? We need to cut the inefficient, profiteering publishers out of the loop.
February 8, 2012
How many open-access papers are getting published these days? And who’s doing it? Inspired by a tweet from @labroides (link at the end so as not to give away the punchline), I went looking for numbers.
We’ll start with our old friends Elsevier, since they are the world’s largest academic publisher by volume and by revenue. One often reads statements such as “Elsevier is committed to Universal Access, Quality and Sustainability … Elsevier wants to enable the broadest possible access to quality research content in sustainable ways that meet our many constituents’ needs” (from their page Elsevier’s position on Access). Even their submission to the OSTP call for comments begins by saying “One of Elsevier’s primary missions is to work towards providing universal access to high-quality scientific information in sustainable ways. We are committed to providing the broadest possible access to our publications.”
The most important way Elsevier does this is by allowing authors to pay a fee, currently $3000, to “sponsor” their articles, so that they are made freely available to readers (though we still don’t know under what specific licence!). While that fee is more than twice the $1350 that PLoS ONE charges, it’s comparable to the $2900 PLoS Biology fee and identical to Springer’s $3000 fee. Elsevier have rather a good policy in connection with their “sponsored article” fee: “Authors can only select this option after receiving notification that their article has been accepted for publication. This prevents a potential conflict of interest where a journal would have a financial incentive to accept an article.”
According to the page linked above, “691 Elsevier articles across some six hundred journals were sponsored in 2010. Sponsorship revenues from these articles amounted to less than 0.1% of Elsevier’s total revenues.” (And indeed, 691 × $3000 = $2.073 M, which is about 0.065% of their 2010 revenue of £2026 M ≈ $3208 M.) As Elsevier publishes 2639 journals in all, that amounts to just over a quarter of one open-access article per journal across the year.
I find that disappointing.
In the other corner (I won’t call it red or blue because of the political implications of those colours, which by the way are the opposite way around on different sides of the Atlantic. Anyway …) In the other corner, we have PLoS ONE. According to its Advanced Search engine, this journal alone published 6750 open-access articles in 2010 — about ten times as many as all Elsevier journals combined. Indeed, in the last month of that year alone, PLoS ONE’s 847 articles comfortably exceeded Elsevier’s output for the year. That’s one journal, in one month, up against a stable of 2639 journals across a whole year.
What can we take away from this? Maybe not very much: Elsevier offer their sponsored-article option to all authors, after all, and they can hardly be blamed if the authors don’t take them up on it.
But why don’t they? Tune in next time for some thoughts on that.
And, finally, here is the tweet that started this line of thought:
October 22, 2011
[This post is mostly a rehash of a comment I made on the last one, but I guess more people see posts than comments. Oh, and I will try to post something about sauropod vertebrae Real Soon Now.]
Last time out, Michael Richmond suggested that one way towards an open-access world is pointing out to decision makers that open-access publishing/reading is cheaper, and commented “that approach will only work if the open-access journals are much less expensive. Are they?”
As I’ve noted elsewhere, the difficulty in shifting to author-pays open access is that universities’ libraries and research departments are funded separately, so that when the extra costs to the latter result in savings for the former, it doesn’t look like a good deal (in the short term) for the research departments.
But let’s ignore that for now, and imagine a perfect economy where universities could shift money from the subscriptions that libraries buy to the publication fees that departments pay. If we could reassign all that money, would the universities spend more or less in total?
The answer may surprise you. A recent article on the Poetic Economics blog shows that Elsevier’s 2009 profits of more than $2.075 billion, divided by the world’s total scholarly output of 1.5 million articles per year, come out to $1383 per article.
Now as it happens, PLoS ONE’s publication fee is $1350 — $33 less.
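The arithmetic is easy to check for yourself (a quick sketch; figures as cited, and note the correction at the end of the post revises the profit number downward):

```python
# Quick check of the arithmetic above (figures as cited in the post).
elsevier_profit_usd = 2.075e9  # Elsevier's reported 2009 profit
articles_per_year = 1.5e6      # total world scholarly output
plos_one_fee_usd = 1350        # PLoS ONE's publication fee at the time

per_article = elsevier_profit_usd / articles_per_year
print(round(per_article))                     # 1383
print(round(per_article - plos_one_fee_usd))  # 33
```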
So think about it. That means the money that Elsevier alone takes out of academia — not its turnover but its profits, which are given to shareholders who have nothing to do with scholarly work — is enough to fund every research article in every field in the world as open access at PLoS ONE’s rate.
(And remember that PLoS is now making a profit at that rate — no longer living off the grants that helped to get it started. At a rate of $1350 per article, it’s not just surviving but flourishing, so we know that that’s a reasonable commercial rate to charge for handling an open-access academic article with no limits on length or on number of high-resolution colour figures.)
Isn’t that … astonishing?
Isn’t it … scandalous?
ONE COMMERCIAL PUBLISHER is taking out of the system enough money for everything to be open to the world. Everything. In the world. Open to the world.
If we all stopped buying Elsevier journals — just Elsevier, no other publisher — and if we threw away the proportion of the savings that Elsevier spends on costs, including salaries, then the profits alone would be sufficient to fund every single research article in the world to be published in PLoS ONE — freely available to the whole world.
What would this mean? Dentists would be able to keep up with the relevant literature. Small businesses would be able to make plans with full information. The Climate Code Foundation would have a sounder and more up-to-date scientific basis for its work. Patient groups would be able to understand their diseases and give informed consent for treatment. Medical charities, amateur palaeontologists, ornithologists and so many more would have access to the information they need. Researchers in third-world countries could have the information they need to cope with life-threatening issues of health, food and water.
We can have all that for our $2.075 billion per year. Or we can keep giving it to Elsevier’s shareholders. Giving it, remember: not buying something with it. Don’t forget, this is not the money that Elsevier absorbs as its costs: salaries, rent, connectivity, what have you. This is their profit. It’s pure profit. This is the money that is taken out of the system.
So, yes, open access is cheaper. Stupidly cheaper. Absurdly, ridiculously, appallingly cheaper.
Update (later the same day)
In an article posted just an hour ago, Cambridge research-group head Peter Murray-Rust comes right out and says it: closed access means people die. That’s the bottom line. Follow his syllogism:
- Information is a key component of health-care.
- Closed access publishers make money by restricting access to information.
- The worse the medicine and healthcare, etc., the more people die.
Are any of those statements false? And if not, is there any way to construe them that doesn’t lead by simple logic to the conclusion that closed access means people die? I don’t see one.
CORRECTION (Monday 24th)
Please see Jeff Hecht’s comment below for an important correction: Elsevier’s annual profits are “only” 60% of the figure originally cited. Which means we’d need to throw in Springer’s profits, too, in order to open-access everything. My bad — thanks for the correction, Jeff.
Why we do mass estimates
Mass estimates are a big deal in paleobiology. If you want to know how much an animal needed in terms of food, water, and oxygen, or how fast it could move, or how many offspring it could produce in a season, or something about its heat balance, or its population density, or the size of its brain relative to its body, then at some point you are going to need a mass estimate.
All that is true, but it’s also a bit bogus. The fact is, people like to know how big things are, and paleontologists are not immune to this desire. We have loads of ways to rationalize our basic curiosity about the bigness of extinct critters. And the figuring out part is both very cool and strangely satisfying. So let’s get on with it.
Two roads diverged
There are two basic modes for determining the mass of an extinct animal: allometric, and volumetric. Allometric methods rely on predictable mathematical relationships between body measurements and body mass. You measure a bunch of living critters, plot the results, find your regression line, and use that to estimate the masses of extinct things based on their measurements. Allometric methods have a couple of problems. One is that they are absolutely horrible for extrapolating to animals outside the size range of the modern sample, which ain’t so great for us sauropod workers. The other is that they’re pretty imprecise even within the size range of the modern sample, because real data are messy and there is often substantial scatter around the regression line, which if faithfully carried through the calculations produces large uncertainties in the output. The obvious conclusion is that anyone calculating extinct-animal masses by extrapolating an allometric regression ought to calculate the 95% confidence intervals (e.g. “Argentinosaurus massed 70000 kg, with a 95% confidence interval of 25000-140000 kg”), but, oddly, no-one seems to do this.
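For what it’s worth, the width of those intervals is easy to demonstrate with a toy regression. The calibration data below are invented, standing in for a modern sample; the point is only that the 95% prediction interval balloons once you extrapolate beyond the sampled range:

```python
import math

# Invented calibration data: log10(limb measurement) vs log10(body mass)
# for six hypothetical extant animals. Real studies use far larger samples.
x = [1.9, 2.1, 2.3, 2.5, 2.7, 2.9]
y = [1.2, 1.8, 2.3, 2.9, 3.4, 4.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

# Residual standard error of the fit
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2))

def prediction_interval(x0, t=2.776):  # critical t for 95%, df = n - 2 = 4
    """95% prediction interval (low, point, high), back in linear units."""
    half = t * se * math.sqrt(1 + 1 / n + (x0 - mx) ** 2 / sxx)
    yhat = intercept + slope * x0
    return 10 ** (yhat - half), 10 ** yhat, 10 ** (yhat + half)

# Interpolating within the sampled range gives a tight interval;
# extrapolating well beyond it (sauropod territory) does not.
print(prediction_interval(2.4))
print(prediction_interval(3.6))
```

The interval half-width grows with the squared distance from the sample mean, which is exactly why extrapolated mass estimates deserve wide error bars.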
Volumetric methods rely on creating a physical, digital, or mathematical model of an extinct animal, determining the volume of the model, multiplying by a scale factor to get the volume of the animal in life, and multiplying that by the presumed density of the living animal to get its mass. Volumetric methods have three problems: (1) many extinct vertebrates are known from insufficient material to make a good 3D model of the skeleton; (2) even if you have a complete skeleton, the method is very sensitive to how you articulate the bones–especially the ribcage–and the amount of flesh you decide to pack on, and there are few good guidelines for doing this correctly; and (3) relatively small changes in the scale factor of the model can produce big changes in the output, because mass goes with the cube of the linear measurement. If your scale factor is off by 10%, your mass will be off by 33% (1.1^3=1.33).
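That cube-law sensitivity is worth internalizing; a couple of lines make the point:

```python
# Mass scales with the cube of linear size, so a linear scale error of
# f becomes a volumetric (and hence mass) error of f cubed.
for f in (1.05, 1.10, 1.20):
    print(f"{(f - 1) * 100:.0f}% linear error -> "
          f"{(f ** 3 - 1) * 100:.0f}% mass error")
# 5% linear error -> 16% mass error
# 10% linear error -> 33% mass error
# 20% linear error -> 73% mass error
```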
On the plus side, volumetric mass estimates are cheap and easy. You don’t need hundreds or thousands of measurements and body masses taken from living animals; you can do the whole thing in your kitchen or on your laptop in the space of an afternoon, or even less. In the old days you’d build a physical model, or buy a toy dinosaur, and use a sandbox or a dunk tank to measure the volume of sand or water that the model displaced, and go from there. Then in the 90s people started building digital 3D models of extinct animals and measuring the volumes of those.
But you don’t need a physical model or a dunk tank or even a laptop to do volumetric modeling. Thanks to a method called graphic double integration or GDI, which is explained in detail in the next section, you can go through the whole process with nothing more than pen and paper, although a computer helps.
Volumetric methods in general, and GDI in particular, have one more huge advantage over allometric methods: they’re more precise and more accurate. In the only published study that compares the accuracy of various methods on extant animals of known mass, Hurlburt (1999) found that GDI estimates were sometimes off by as much as 20%, but that allometric estimates were much worse, with several off by 90-100% and one off by more than 800%. GDI estimates were not only closer to the right answers, they also varied much less than allometric methods. On one hand, this is good news for GDI aficionados, since it is the cheapest and easiest of all the mass estimation methods out there. On the other hand, it should give us pause that on samples of known mass, the best available method can still be off by as much as a fifth even when working with complete bodies, including the flesh. We should account for every source of error that we can, and still treat our results with appropriate skepticism.
Graphic Double Integration
GDI was invented by Jerison (1973) to estimate the volumes of cranial endocasts. Hurlburt (1999) was the first to apply it to whole animals, and since then it has been used by Murray and Vickers-Rich (2004) for mihirungs and other extinct flightless birds, yours truly for small basal saurischians (Wedel 2007), Mike for Brachiosaurus and Giraffatitan (Taylor 2009), and probably many others that I’ve missed.
GDI is conceptually simple, and easy to do. Using orthogonal views of a life restoration of an extinct animal, you divide the body into slices, treat each slice as an ellipse whose dimensions are determined from two perspectives, compute the average cross-sectional area of each body part, multiply that by the length of the body part in question, and add up the results. Here’s a figure from Murray and Vickers-Rich (2004) that should clarify things:
One of the cool things about GDI is that it is not just easy to separate out the relative contributions of each body region (i.e., head, neck, torso, limbs) to the total body volume, it’s usually unavoidable. This not only lets you compare body volume distributions among animals, it also lets you tinker with assigning different densities to different body parts.
An Example: Plateosaurus
Naturally I’m not going to introduce GDI without taking it for a test drive, and given my proclivities, that test drive is naturally going to be on a sauropodomorph. All we need is an accurate reconstruction of the test subject from at least two directions, and preferably three. You could get these images in several ways. You could take photographs of physical models (or toy dinosaurs) from the front, side, and top–that could be a cool science fair project for the dino-obsessed youngster in your life. You could use the white-bones-on-black-silhouette skeletal reconstructions that have become the unofficial industry standard. You could also use orthogonal photographs of mounted skeletons, although you’d have to make sure that they were taken from far enough away to avoid introducing perspective effects.
For this example, I’m going to use the digital skeletal reconstruction of the GPIT1 individual of Plateosaurus published by virtual dino-wrangler and frequent SV-POW! commenter Heinrich Mallison (Mallison et al 2009, fig. 14). I’m using this skeleton for several reasons: it’s almost complete, very little distorted, and I trust that Heinrich has all the bits in the right places. I don’t know if the ribcage articulation is perfect but it looks reasonable, and as we saw last time that is a major consideration. Since Heinrich built the digital skeleton in digital space, he knows precisely how big each piece actually is, so for once we have scale bars we can trust. Finally, this skeleton is well known and has been used in other mass estimate studies, so when I’m done we’ll have some other values to compare with and some grist for discussion. (To avoid accidental bias, I’m not looking at those other estimates until I’ve done mine.)
Of course, this is just a skeleton, and for GDI I need the body outline with the flesh on. So I opened the image in GIMP (still free, still awesome) and drew on some flesh. Here we necessarily enter the realm of speculation and opinion. I stuck pretty close to the skeletal outline, with the only major departures being for the soft tissues ventral to the vertebrae in the neck and for the bulk of the hip muscles. As movie Boromir said, there are other paths we might take, and we’ll get to a couple of alternatives at the end of the post.
This third image is the one I used for actually taking measurements. You need to lop off the arms and legs and tote them up separately from the body axis. I also filled in the body outlines and got rid of the background so I wouldn’t have any distracting visual clutter when I was taking measurements. I took the measurements using the measuring tool in GIMP (compass icon in the toolbar), in orthogonal directions (i.e., straight up/down and left/right), at regular intervals–every 20 pixels in this case.
One thing you’ll have to decide is how many slices to make. Ideally you’d do one slice per pixel, and then your mathematical model would be fairly smooth. There are programs out there that will do this for you; if you have a 3D digital model you can just measure the voxels (= pixels cubed) directly, and even if all you have is 2D images there are programs that will crank the GDI math for you and measure every pixel-width slice (Motani 2001). But if you’re just rolling with GIMP and OpenOffice Calc (or Photoshop and Excel, or calipers and a calculator), you need to have enough slices to capture most of the information in the model without becoming unwieldy to measure and calculate. I usually go with 40-50 slices through the body axis and 9 or 10 per limb.
The area of a circle is pi*r^2, and the area of an ellipse is pi*r*R, where r and R are the radii of the minor and major axes. So enter the widths and heights of the body segments in pixels in two columns (we’ll call them A and B) in your spreadsheet, and create a third column with the function 3.14*A1*B1/4. Divide by four because the pixel counts you measured on the image are diameters and the formula requires radii. If you forget to do that, you are going to get some wacky numbers.
One obvious departure from reality is that the method assumes that all of the body segments of an animal have elliptical cross-sections, when that is often not exactly true. But it’s usually close enough for the coarse level of detail that any mass estimation method is going to provide, and if it’s really eating you, there are ways to deal with it without assuming elliptical cross-sections (Motani 2001).
For each body region, average the resulting areas of the individual slices and multiply the resulting average areas by the lengths of the body regions to get volumes. Remember to measure the lengths at right angles to your diameter measurements, even when the body part in question is curved, as is the tail of Heinrich’s Plateosaurus.
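Putting the steps so far together (elliptical slice areas, averaged and multiplied by region length), a minimal GDI sketch might look like this; the slice measurements here are made up for illustration, not taken from Heinrich’s model:

```python
import math

def gdi_volume(widths, heights, length):
    """Graphic double integration for one body region.

    widths, heights: slice diameters from two orthogonal views
    (any consistent unit; pixels are fine).
    length: the region's length, measured at right angles to the slices.
    Each slice is treated as an ellipse of area pi * w * h / 4; divide
    by 4 because the measurements are diameters, not radii.
    """
    areas = [math.pi * w * h / 4 for w, h in zip(widths, heights)]
    return (sum(areas) / len(areas)) * length

# Made-up pixel measurements for three body regions:
torso = gdi_volume([80, 100, 110, 100, 85], [90, 110, 120, 105, 88], 400)
tail = gdi_volume([40, 30, 20, 10], [45, 32, 22, 11], 500)
hindlimbs = 2 * gdi_volume([30, 28, 20, 15], [30, 28, 20, 15], 300)  # paired!

total_px3 = torso + tail + hindlimbs  # still in pixels cubed at this stage
```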
For sauropods you can usually treat the limbs as cylinders and just enter the lateral view diameter twice, unless you are fortunate enough to have fore and aft views. It’s not a perfect solution but it’s probably better than agonizing over the exact cross sectional shape of each limb segment, since that will be highly dependent on how much flesh you (or some other artist) put on the model, and the limbs contribute so little to the final result. For Plateosaurus I made the arm circular, the forearm and hand half as wide as tall, the thigh twice as long as wide, and the leg and foot round. Don’t forget to double the volumes of the limbs since they’re paired!
We’re not done, because so far all our measurements are in pixels (and pixels cubed). But already we know something cool, which is what proportion each part of the body contributes to the total volume. In my model based on Heinrich’s digital skeleton, segmented as shown above, the relative contributions are as follows:
- Head: 1%
- Neck: 3%
- Trunk: 70%
- Tail: 11%
- Forelimbs (pair): 3%
- Hindlimbs (pair): 12%
Already one of the great truths of volumetric mass estimates is revealed: we tend to notice the extremities first, but really it is the dimensions of the trunk that drive everything. You could double the size of any given extremity and the impact on the result would be noticeable, but small. Consequently, modeling the torso accurately is crucial, which is why we get worried about the preservation of ribs and the slop inherent in complex joints.
The 170 cm scale bar in Heinrich’s figure measures 292 pixels, or 0.582 cm per pixel. The volume of each body segment must be multiplied by 0.582 cubed to convert to cubic cm, and then divided by 1000 to convert to liters, which are the lingua franca of volumetric measurement. If you’re a math n00b, your function should look like this: volume in liters = volume in pixels*SF*SF*SF/1000, where SF is the scale factor in units of cm/pixel. Don’t screw up and use pixels/cm, or if you do, remember to divide by the scale factor instead of multiplying. Just keep track of your units and everything will come out right.
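As a sanity check on the unit bookkeeping, here is that same conversion written out in code, using the scale factor computed above and an arbitrary round input volume:

```python
# Scale factor from the text: a 170 cm scale bar spanning 292 pixels.
SF = 170 / 292  # cm per pixel, ~0.582

def pixels_to_liters(volume_px3):
    # Volume scales with the cube of the linear scale factor,
    # and 1000 cm^3 = 1 liter.
    return volume_px3 * SF ** 3 / 1000

print(round(pixels_to_liters(1_000_000), 1))  # ~197.3 L for a million px^3
```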
If you’re not working from an example as perfect as Heinrich’s digital (and digitally measured) skeleton, you’ll have to find something else to use for a scale bar. Something big and reasonably impervious to error is good. I like the femur, if nothing else is available. Any sort of multi-segment dimension like shoulder height or trunk length is going to be very sensitive to how much gloop someone thought should go between the bones. Total length is especially bad because it depends not only on the intervertebral spacing but also on the number of vertebrae, and even most well-known dinos do not have complete vertebral series.
Finally, multiply the volume in liters by the assumed density to get the mass of each body segment. Lots of people just go with the density of water, 1.0 kg/L, which is the same as saying a specific gravity (SG) of 1. Depending on what kind of animal you’re talking about, that may be a little bit off or it may be fairly calamitous. Colbert (1962) found SGs of 0.81 and 0.89 for an extant lizard and croc, which means an SG of 1.0 is off by between 11% and 19%. Nineteen percent–almost a fifth! For birds, it’s even worse; Hazlehurst and Rayner (1992) found an SG of 0.73.
Now, scroll back up to the diagram of the giant moa, which had a mass of 257.5 kg “assuming a specific gravity of 1”. If the moa was as light as an extant bird–and its skeleton is highly pneumatic–then it might have had a mass of only 188 kg (257.5*0.73). Or perhaps its density was higher, like that of a lizard or a croc. Without a living moa to play with, we may never know. Two points here: first, the common assumption of whole-body densities of 1.0 is demonstrably incorrect* for many animals, and second, since it’s hard to be certain about the densities of extinct animals, maybe the best thing is to try the calculation with several densities and see what results we get. (My thoughts on the plausible densities of sauropods are here.)
* Does anyone know of actual published data indicating a density of 1.0 for a terrestrial vertebrate? Or is the oft-quoted “bodies have the same density as water” basically bunk? (Note: I’m not disputing that flesh has a density close to that of water, but bones are denser and lungs and air spaces are lighter, and I want to know the mean density of the whole organism.)
Back to Plateosaurus. Using the measurements and calculations presented above, the total volume of the restored animal is 636 liters. Here are the whole body masses (in kg) we get using several different densities:
- SG=1.0 (water), 636 kg
- SG=0.89 (reptile high), 566 kg
- SG=0.81 (reptile low), 515 kg
- SG=0.73 (bird), 464 kg
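The table above is just one multiplication per row, which makes it trivial to re-run with any density you prefer:

```python
# Re-running the mass table above: one multiplication per density.
volume_liters = 636  # total GDI volume of the restored Plateosaurus
densities = {
    "water": 1.00,
    "reptile high": 0.89,
    "reptile low": 0.81,
    "bird": 0.73,
}
for label, sg in densities.items():
    print(f"SG={sg:.2f} ({label}): {volume_liters * sg:.0f} kg")
# SG=1.00 (water): 636 kg
# SG=0.89 (reptile high): 566 kg
# SG=0.81 (reptile low): 515 kg
# SG=0.73 (bird): 464 kg
```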
I got numbers. Now what?
I’m going to describe three possible things you could do with the results once you have them. In my opinion, two of them are the wrong thing to do and one is the right thing to do.
DON’T mistake the result of your calculation for The Right Answer. You haven’t stumbled on any universal truth. Assuming you measured enough slices and didn’t screw up the math, you know the volume of a mathematical model of an organism. If you crank all the way through the method you will always get a result, but that result is only an estimate of the volume of the real animal the model was based on. There are numerous sources of error that could plague your results, including: incomplete skeletal material, poorly articulated bones, wrong scale factor, wrong density, wrong amount of soft tissue on the skeleton. I saved density and gloop for last because you can’t do much about them; here the strength of your estimate relies on educated guesses that could themselves be wrong. In short, you don’t even know how wrong your estimate might be.
Pretty dismal, eh?
DON’T assume that the results are meaningless because you don’t know the actual fatness or the density of the animal, or because your results don’t match what you expected or what someone else got. I see this a LOT in people that have just run their first phylogenetic analysis. “Why, I could get any result I wanted just by tinkering with the input!” Well, duh! Like I said, the method will always give you an answer, and it won’t tell you whether the answer is right or not. The greatest advantage of explicit methods like cladistics and GDI is that you know what the input is, and so does everyone else if you are honest about reporting it. So if someone disagrees with your character coding or with how much the belly sags on your model sauropod, you can have a constructive discussion and hopefully science as a whole gets closer to the right answer (even if we have no way of knowing if or when we arrive, and even if your pet hypothesis gets trampled along the way).
DO be appropriately skeptical of your own results without either accepting them as gospel or throwing them out as worthless. The fact that the answer changes as you vary the parameters is a feature, not a bug. Investigate a range of possibilities, report all of those results, and feel free to argue why you think some of the results are better than others. Give people enough information to replicate your results, and compare your results to those of other workers. Figure out where yours differ and why.
Try to think of more interesting things you could do with your results. Don Henderson went from digitally slicing critters (Henderson 1999) to investigating floating sauropods (Henderson 2004) to literally putting sauropods through their paces (Henderson 2006)–not to mention working on pterosaur flight and swimming giraffes and other cool stuff. I’m not saying you should run out and do those exact things, but rather that you’re more likely to come up with something interesting if you think about what you could do with your GDI results instead of treating them as an end in themselves.
How massive was GPIT1, really?
Beats me. I’m not the only one who has done a mass estimate based on that skeleton. Gunga et al. (2007) did not one but two volumetric mass estimates based on GPIT1, and Mallison (2010) did a whole series, and they published their models so we can see how they got there. (In fact, many of you have probably been reading this post in slack-jawed horror, wondering why I was ignoring those papers and redoing the mass estimate the hard way. Now you know!) I’m going to discuss the results of Gunga et al. (2007) first, and come back to Mallison (2010) at the end.
Here’s the “slender” model of Gunga et al. 2007 (their fig. 3):
and here’s their “robust” model (Gunga et al. 2007:fig. 4):
(These look a bit…inelegant, let’s say…because they are based on the way the physical skeleton is currently mounted; Heinrich’s model looks much nicer because of his virtual remount.)
For both mass estimates they used a density of 0.8, which I think is probably on the low end of the range for prosauropods but not beyond the bounds of possibility. They got a mass of 630 kg for the slender model and 912 kg for the robust one.
Their 630-kg estimate for the slender model is deceptively close to the upper end of my range; deceptive because their 630-kg estimate assumes a density of 0.8 and my 636-kg one assumes a density of 1.0. The volumes are more directly comparable: 636 L for mine, 790 L for their slender one, and 1140 L for their robust one. I think that’s pretty good correspondence, and the differences are easily explained. My version is even skinnier than their slender version; I made it about as svelte as it could possibly have been. I did that deliberately, because it’s always possible to pack on more soft tissue but at some point the dimensions of the skeleton establish a lower bound for how voluminous a healthy (i.e., non-starving) animal could have been. The slender model of Gunga et al. (2007) looks healthier than mine, whereas their robust version looks, to my eye, downright corpulent. But not unrealistically so; fat animals are less common than skinny ones but they are out there to be found, at least in some times and places. It pays to remember that the mass of a single individual can fluctuate wildly depending on seasonal food availability and exercise level.
For GPIT1, I think something like 500 kg is probably a realistic lower bound and 900 kg is a realistic upper bound, and the actual mass of an average individual Plateosaurus of that size was somewhere in the middle. That’s a big range–900 kg is almost twice 500 kg. It’s hard to narrow down because I really don’t know how fleshy Plateosaurus was or what its density might have been, and I feel less comfortable making guesses because I’ve spent much less time working on prosauropods than on sauropods. If someone put a gun to my head, I’d say that in my opinion, a bulk somewhere between that of my model and the slender model of Gunga et al. is most believable, and a density of perhaps 0.85, for a result in the neighborhood of 600 kg. But those are opinions, not hypotheses, certainly not facts.
I’m happy to see that my results are pretty close to those of Mallison (2010), who got 740 L, which is also not far off from the slender model of Gunga et al. (2007). So we’ve had at least three independent attempts at this and gotten comparable results, which hopefully means we’re at least in the right ballpark (and pessimistically means we’re all making mistakes of equal magnitude!). Heinrich’s paper is a goldmine, with loads of interesting stuff on how the skeleton articulates, what poses the animal might have been capable of, and how varying the density of different body segments affects the estimated mass and center of mass. It’s a model study and I’d happily tell you all about it but you should really read it for yourself. Since it’s freely available (yay open access!), there’s no barrier to you doing so.
So: use GDI with caution, but do use it. It’s easy, it’s cool, it’s explicit, it will give you lots to think about and give us lots to talk about. Stay tuned for related posts in the not-too-distant future.
- Gunga, H.-C., Suthau, T., Bellmann, A., Friedrich, A., Schwanebeck, T., Stoinski, S., Trippel, T., Kirsch, K., Hellwich, O. 2007. Body mass estimations for Plateosaurus engelhardti using laser scanning and 3D reconstruction methods. Naturwissenschaften 94(8):623-630.
- Henderson, D.M. 1999. Estimating the mass and centers of mass of extinct animals by 3D mathematical slicing. Paleobiology 25:88-106.
- Henderson, D.M. 2004. Tipsy punters: sauropod dinosaur pneumaticity, buoyancy and aquatic habits. Proceedings: Biological Sciences 271 (Supplement):S180-S183.
- Henderson, D.M. 2006. Burly gaits: centers of mass, stability and the trackways of sauropod dinosaurs. Journal of Vertebrate Paleontology 26:907-921.
- Hurlburt, G. 1999. Comparison of body mass estimation techniques, using Recent reptiles and the pelycosaur Edaphosaurus boanerges. Journal of Vertebrate Paleontology 19:338–350.
- Jerison, H.J. 1973. Evolution of the Brain and Intelligence. Academic Press, New York, NY, 482 pp.
- Mallison, H., Hohloch, A., and Pfretzschner, H.-U. 2009. Mechanical digitizing for paleontology–new and improved techniques. Palaeontologia Electronica 12(2):4T, 41 pp.
- Mallison, H. 2010. The digital Plateosaurus I: Body mass, mass distribution, and posture assessed by using CAD and CAE on a digitally mounted complete skeleton. Palaeontologia Electronica 13(2):8A, 26 pp.
- Motani, R. 2001. Estimating body mass from silhouettes: testing the assumption of elliptical body cross-sections. Paleobiology 27(4):735–750.
- Murray, P.F. and Vickers-Rich, P. 2004. Magnificent Mihirungs. Indiana University Press, Bloomington, IN, 410 pp.
- Taylor, M.P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
- Wedel, M.J. 2007. What pneumaticity tells us about ‘‘prosauropods,’’ and vice versa. Special Papers in Palaeontology 77:207–222.
October 13, 2009
UPDATE December 3, 2009
I screwed up, seriously. Tony Thulborn writes in a comment below to correct several gross errors I made in the original post. He’s right on every count. I have no defense, and I am terribly sorry, both to Tony and to everyone who ever has or ever will read this post.
He is correct that the paper in question (Thulborn et al 1994) does discuss track length, not diameter, so my ranting about that below is not just immoderate, it’s completely undeserved. I don’t know what I was thinking. I did reread the paper before I wrote the post, but I got the two switched in my mind, and I assigned blame where none existed. In particular, it was grossly unfair of me to tar Tony’s careful work with the same brush I used to lament the confused hodgepodge of measurements reported in the media (not by scientists) for the Plagne tracks.
I am also sorry that I criticized the 1994 paper and implied that the work was incomplete. I was way out of line.
I regard this post as the most serious mistake in my professional career. I want very badly to somehow unmake it. I am adding corrections to the post below and striking out but not erasing my mistakes; they will stand as a reminder of my fallibility and a warning against being so high-handed and unfair in the future.
I’m sorry. I beg forgiveness from Tony, from all of our readers, and from the broader vertebrate paleontology community. Please forgive me.
You might have seen a story last week about some huge sauropod tracks discovered in Upper Jurassic deposits from the Jura plateau in France, near the town of Plagne. According to the news reports, the tracks are the largest ever discovered. Well, let’s see.
The Guardian (from which I stole the image above) says the prints are “up to 2 metres (6ft 6 in) in diameter”, but ScienceDaily says “up to 1.5 m in total diameter”. Not sure how ‘total diameter’ is different from regular diameter, but that’s science reporting for you. The BBC clarifies that, “the depressions are about 1.5m (4.9ft) wide”, which might be the key here (see below), but then mysteriously continues, “corresponding to animals that were more than 25m long and weighed about 30 tonnes.” I find it rather unlikely that a pes track 1.5 m wide indicates an animal only as big as Giraffatitan (hence this post).
So there’s some uncertainty with respect to the diameter of the tracks–half a meter of uncertainty, to be precise. But sauropod pes tracks are usually longer than wide, and a print 1.5 m wide might actually be 2 m long.
Not incidentally, Thulborn (1994) described some big sauropod tracks from the Broome Sandstone in Australia, with pes prints up to 1.5 m. Although the photos of the tracks are not as clear as one might wish, they do appear to show digit impressions and are probably not underprints. [See Tony Thulborn's comment below regarding footprints vs underprints.]
I’ll feel a lot better about the Plagne tracks when the confusion about their dimensions is cleared up and when some evidence is presented that they also are not underprints. In any case, the only dimension with any orientation cited for the Plagne tracks is the 1.5 m width reported by the BBC, so we’ll go with that. So the Plagne tracks might only tie, but not beat, Thulborn’s tracks.
…Then again, Thulborn only said that the biggest tracks were up to 150 cm in diameter. What does that mean–length? Width? Are the tracks perfect circles? Does no one who works on giant sauropod tracks know how to report measurements? These questions will have to wait, because despite the passing of a decade and a half, the world’s (possibly second-) biggest footprints–from anything! ever!–have not yet merited a follow-up paper. [Absolutely wrong and unfair; please see the apology at top and Tony Thulborn's comment below.]
Nevertheless, for the remainder of this post we’ll accept that at least some sauropods were leaving pes prints a meter and a half wide. Naturally, it occurs to me to wonder how big those sauropods were. I don’t know of any studies that attempt to rigorously estimate the size of a sauropod from its tracks or vice versa, so in the finest tradition of the internet in general and blogging in particular, I’m going to wing it.
First we need some actual measurements of sauropod feet. When Mike and I were in Berlin last fall (gosh, almost a year ago!), we measured the feet (pedes) of the mounted Giraffatitan and Diplodocus for this very purpose. The Diplodocus feet were both 59 cm wide, and the Giraffatitan feet were 68 and 73 cm wide. The Diplodocus feet are trustworthy, the Giraffatitan feet less so. Unfortunately, the pes is, after the cervico-dorsal neural spines, the part of the skeleton of Giraffatitan that is less well known than I would like. The reconstructed feet look believable, but “believability” is hard to calibrate and probably a poor predictor of reality when working with sauropods.
One thing I won’t go into is that Giraffatitan (HM SII) probably massed more than twice what Diplodocus (CM 84/94) did, but on the other hand G. bore more of its weight on its forelimbs. It would be interesting to calculate whether the shifted center of mass would be enough to even out the pressure exerted by the hindfeet of the two animals; Don Henderson may have done this already.
Anyway, let’s say for the sake of argument that the hindfeet of the mounted Giraffatitan are sized about right. The next problem is figuring out how much soft tissue surrounded the bones. In other words, how much wider was the fleshy foot–deformed under load!–than the articulated pes skeleton? I am of two minds on this. On one hand, sauropods probably had a big heel pad like that of elephants, and it seems reasonable that the heel pad plus the normal skin, fat, and muscle might have expanded the fleshy foot considerably beyond the edges of the bones. On the other hand, the pedal skeleton is widest across the distal ends of the phalanges, and in well-preserved tracks like the one below the fleshy foot is clearly not much wider than that (thanks, Brian, for the photo!).
Bear in mind that a liberal estimate of soft tissue will give a conservative estimate of the animal’s size, and vice versa. Looking at the AMNH track pictured above, it seems that the width added by soft tissue could possibly be as little as 5% of the width of the pes skeleton. Skewing hard in the opposite direction, an additional 20% or more does not seem unreasonable for other animals (keep in mind this would only be 10% on either side of the foot). Using those numbers, Diplodocus (CM 84/94) would have left tracks as narrow as 62 cm or as wide as 71 cm. For Giraffatitan (HM SII) I’ll use the wider of the two pes measurements, because the foot is expected to deform under load and the 73 cm wide foot looked just as believable as the 68 cm foot (for whatever that’s worth). Applying the same scale factors (1.05 and 1.20) yields a pes track width of 77-88 cm.
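The width arithmetic above is easy to sketch. Here is a throwaway Python version (the 1.05 and 1.20 factors are the soft-tissue guesses from the text; `track_width_range` is just a name I made up):

```python
# Estimate fleshy track width from bony pes width, using the two
# soft-tissue scale factors discussed in the text:
#   1.05 = conservative soft tissue (~5% wider than the skeleton)
#   1.20 = liberal soft tissue (~10% added on either side)

def track_width_range(pes_width_cm):
    """Return (narrow, wide) estimated track widths in cm."""
    return (round(pes_width_cm * 1.05), round(pes_width_cm * 1.20))

print(track_width_range(59))  # Diplodocus CM 84/94 pes -> (62, 71)
print(track_width_range(73))  # Giraffatitan HM SII pes -> (77, 88)
```

Those outputs match the 62-71 cm and 77-88 cm ranges quoted above.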
These numbers are like pieces of legislation, or sausages: the results are more pleasant to contemplate than the process that produced them. They’re ugly, and possibly wrong. But they give us someplace to start from in considering the possible sizes of the biggest sauropod trackmakers. Something with a hindfoot track 1.5 meters wide would be, using these numbers, conservatively more than twice as big (2.11×) as the mounted Carnegie Diplodocus, or 170% the size of the mounted Berlin Giraffatitan. That’s right into Amphicoelias fragillimus/Bruhathkayosaurus territory. The diplo-Diplodocus would have been 150 feet long and, even assuming a very conservative 10 tons for Vanilla Dippy (14,000 L × 0.7 kg/L = 9800 kg), would have had a mass of 94 metric tons (104 short tons). The monster Giraffatitan-like critter would have been “only” 130 feet long, but with a 14.5-meter neck and a mass of 113 metric tons (125 short tons; starting from a conservative 23 metric tons for HM SII).
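The cube-law arithmetic behind those figures can be sketched in a few lines (a rough Python sketch; `scaled_mass` is a throwaway name, and the 10 t and 23 t baselines are the conservative masses from the text):

```python
# Linear size ratio comes from track widths; mass scales with the cube of
# linear size (isometry). Baselines: 10 t for the Carnegie Diplodocus,
# 23 t for Giraffatitan HM SII, both from the text.

def scaled_mass(track_cm, baseline_track_cm, baseline_tonnes):
    """Return (linear ratio, estimated mass in tonnes) for a trackmaker."""
    ratio = track_cm / baseline_track_cm
    return ratio, baseline_tonnes * ratio**3

r, m = scaled_mass(150, 71, 10)   # vs. Diplodocus, liberal soft tissue
print(round(r, 2), round(m))      # ~2.11x, ~94 tonnes

r, m = scaled_mass(150, 88, 23)   # vs. Giraffatitan, liberal soft tissue
print(round(r, 2), round(m))      # ~1.70x, ~114 tonnes (the text's 113 t
                                  # comes from rounding the ratio to 1.70 first)
```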
Keep in mind that these are conservative estimates, for both the size of the trackmakers and the masses of the “known” critters. If we use the conservative soft tissue/liberal animal size numbers, the makers of the 1.5 meter tracks were 2.4 times as big as the mounted Diplodocus or almost twice as big as the mounted Giraffatitan, in which case masses in the blue whale range of 150-200 tons become not just probable but inevitable.
Going the other way, I can think of only a handful of ways that the “conservative” trackmaker estimates might still be too big:
First, the pes of Giraffatitan might have been bigger than reconstructed in the mounted skeleton. Looking at the photo above, I can imagine a pes 10% wider that wouldn’t do any violence to the “believability” of the mount. That would make the estimated track of HM SII 10% wider and the estimated size of the HM-SII-on-steroids correspondingly smaller. But that wouldn’t affect the scaled-up Diplodocus estimate, and the feet of Giraffatitan would have to be a LOT bigger than reconstructed to avoid the reality of an animal at least half again as big as HM SII.
Second, the amount of soft tissue might have been greater than even the liberal soft tissue/conservative size estimate allows. But I think that piling on 20% more soft tissue than bone is already beyond what most well-preserved tracks would justify, so I’m not worried on that score. (What scares me more is the thought that the conservative estimates are too conservative, and the real trackmakers even bigger.)
Third, I suppose it is possible that sauropod feet scaled allometrically with size and that big sauropods left disproportionately big tracks. I’m also not worried about this. For one thing, when they’ve been measured, sauropod appendicular elements tend to scale isometrically, and it would be weird if feet were the undiscovered exception. For another, the allometric oversizing of the feet would have to be pronounced to make much of a dent in the estimated size of the trackmakers. I find the idea of 100-ton sauropods more palatable than the idea of 70-ton sauropods with clown shoes.
Fourth, the meta-point: what if the Broome and Plagne tracks are underprints? [Please see Tony Thulborn's comment below regarding footprints and underprints.] I’ve seen some tracks-with-undertracks where the magnification of the apparent track size in the undertracks was just staggering. The Broome tracks have gotten only one brief note, and the Plagne tracks have not been formally described at all, so all of this noodling around about trackmaker size could go right out the window. Mind you, I don’t have any evidence that either set consists of underprints (at least for the Broome tracks, the evidence seems to go the other way); I’m just trying to cover all possible bases.
So. Sauropods got big. As usual, we can’t tell exactly how big. Any one individual can leave many tracks but only one skeleton, so we might expect the track record to sample the gigapods more effectively than the skeletal record. Interestingly, the largest fragmentary skeletal remains (i.e., Amphicoelias and Bruhathkayosaurus, assuming they’re legit) and the largest tracks (i.e., Plagne and Broome) point to animals of roughly the same size.
It’s also weird that some of the biggest contenders in both categories have been so little published. I mean, if I had access to Bruhathkayosaurus or a track 1.5 m wide, you can bet that I’d be dropping everything else like a bad habit until I had the gigapod evidence properly written up. What gives? [The implication that the Broome tracks were not properly written up is both wrong and unfair; please see the apology at top.]
Finally, IF the biggest fragmentary gigapods and the biggest tracks are faithful indicators of body size, they suggest that gigapods were broadly distributed in space and time (and probably phylogeny). I wonder if these were representatives of giga-taxa, or just extremely large individuals of otherwise vanilla sauropods. Your thoughts are welcome.
Epilogue: What About Breviparopus?
It’s past time someone set the record straight about damn Breviparopus. The oft-quoted track length of 115 cm is (A) much smaller than either the Broome or Plagne tracks, and (B) the combined length of the manus and pes prints together; I know, I looked it up (Dutuit and Ouazzou 1980). Why anyone would report track “length” that way is beyond me, but what is more mysterious is why anyone was taken in by it: the width of 50 cm (pathetic!) is usually quoted along with the 115 cm “length”, indicating an animal smaller than Vanilla Diplodocus (track length is much more likely than width to get distorted by foot motions during locomotion). [This part is wrong; see the update below.] But people keep stumbling on crap (thanks, Guinness book!) about how at 157 feet long (determined how, exactly?) Breviparopus was possibly the largest critter to walk the planet. Puh-leeze. If there’s one fact that everyone ought to know about Breviparopus, it’s that it was smaller than the big mounted sauropods at museums worldwide. The only thing super-sized about it is the cloud of ignorance, confusion, and hype that clings to the name like cheap perfume. Here’s the Wikipedia article if you want to do some much-needed revising.
UPDATE (Nov 17 2009): The width of the Breviparopus pes tracks is 90 cm, not 50 cm. The story of the 50 cm number is typically convoluted. Many thanks to Nima Sassani for doing the detective work. Rather than steal his thunder, I’ll point you to his explanation here. Point A above is still valid: Breviparopus was dinky compared to the Broome and Plagne trackmakers.
You know I ain’t gonna raise the specter of a beast 1.7 times the size of HM SII without throwing in a photoshopped giant cervical. So here you go: me with C8 of Giraffatitan blown up to 170% (the vert, not me). Compare to unmodified original here.
- Dutuit, J.M., and A. Ouazzou. 1980. Découverte d’une piste de Dinosaure sauropode sur le site d’empreintes de Demnat (Haut-Atlas marocain). Mémoires de la Société Géologique de France, Nouvelle Série 139:95-102.
- Thulborn, R.A., T. Hamley, and P. Foulkes. 1994. Preliminary report on sauropod dinosaur tracks in the Broome Sandstone (Lower Cretaceous) of Western Australia. Gaia 10:85-96.