The Researcher to Reader (R2R) conference at the start of this week featured a debate on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”. I’ve already posted Toby Green’s opening statement for the proposition and Pippa Smart’s opening statement against it.

Now here is my (shorter) response in favour of the motion, which is supposed to be a response specifically to Pippa's opening statement against. As with Toby's piece, I mistimed mine and ran into my (rather niggardly) three-minute limit, so I didn't quite get to the end. But here's the whole thing.

Here I am giving a talk on the subject “Should science always be open” back at ESOF 2014. (I don’t have any photos of me at the R2R debate, so this is the closest thing I could find.)


 

Like the Brexit debate, this is one where it’s going to be difficult to shift people’s opinions. Most of you will have come here already entrenched on one side or other of this issue. Unlike the Brexit debate, I hope this is one where evidence will be effective.

And that, really, is the issue. All of our intuition tells us, as our colleagues have argued, that high-prestige journals carry intrinsically better papers, or at least more highly cited ones — but the actual data tells us that this is not true: papers in these journals are no more statistically powerful, and are more prone to be inflated or even fraudulent. In the last few days, news has broken of a "paper mill" that has successfully seen more than 400 fake papers pass peer review at reputable mainstream publishers despite having absolutely no underlying data. Evidently the venue of its publication tells us nothing useful about the quality of a paper.

It is nevertheless true that many scientists, especially early career researchers, spend an inordinate proportion of their time and effort desperately trying to get their work into Science and Nature, slicing and dicing substantial projects into the sparsely illustrated extended-abstract format that these journals demand, in the belief that this will help their careers. Worse, it is also true that they are often correct: publications in these venues do help careers. But that is not because of any inherent quality in the papers published there, which in many cases are of lower quality than they would have been in a different journal. Witness the many two-page descriptions of new dinosaurs that merit hundred-page monographic treatments — which they would have got in less flashy but more serious journals like PLOS ONE.

If we are scientists, or indeed humanities scholars, then we have to respect evidence ahead of our preconceptions. And once you start looking for actual data about the quality of papers in different venues, you find that there is a lot of it — and more emerging all the time. Only two days ago I heard of a new preprint by Carneiro et al. It defines an "overall reporting score", which it describes as "an objective dimension of quality that is readily measurable [as] completeness of reporting". When they plotted this score against the impact factor of journals, they found no correlation.

We don’t expect this kind of result, so we are in danger of writing it off — just as Brexiteers write off stories about economic damage and companies moving out of Britain as “project fear”. The challenge for us is to do what Daily Mail readers perhaps can’t: to rise above our preconceptions, and to view the evidence about our publishing regimen with the same rigour and objectivity that we view the evidence in our own specialist fields.

Different journals certainly do have useful roles: as Toby explained in his opening statement, they can guide us to articles that are relevant to our subject area, pertain to our geographical area, or relate to the work of a society of interest. What they can’t guide us to is intrinsically better papers.

In The Adventure of the Copper Beeches, Arthur Conan Doyle tells us that Sherlock Holmes cries out "Data! Data! Data! I can't make bricks without clay." And yet in our attempts to understand the scholarly publishing system that we all interact with so extensively, we all too easily ignore the clay that is readily to hand. We can, and must, do better.

And what does the data say? It tells us clearly, consistently and unambiguously that the venue of its publication tells us nothing useful about the quality of a paper.

References

  • Carneiro, Clarissa F. D., Victor G. S. Queiroz, Thiago C. Moulin, Carlos A. M. Carvalho, Clarissa B. Haas, Danielle Rayêe, David E. Henshall, Evandro A. De-Souza, Felippe Espinelli, Flávia Z. Boos, Gerson D. Guercio, Igor R. Costa, Karina L. Hajdu, Martin Modrák, Pedro B. Tan, Steven J. Burgess, Sylvia F. S. Guerra, Vanessa T. Bortoluzzi, Olavo B. Amaral. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. bioRxiv 581892. doi:10.1101/581892

 

Yesterday I told you all about the Researcher to Reader (R2R) conference and its debate on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”. I posted the opening statement for the proposition, which was co-written by Toby Green and me.

Now here is the opening statement against the proposition, presented by Pippa Smart of Learned Publishing, having been co-written by her and Niall Boyce of The Lancet Psychiatry.

(I’m sure it goes without saying that there is much in here that I disagree with. But I will let Pippa speak for herself and Niall without interruption for now, and discuss her argument in a later post.)

The debate in progress. I couldn’t find a photo of Pippa giving her opening statement, so here instead is her team-mate Niall giving his closing statement.


 

The proposal is that all articles or papers, from any venue, must be evaluated on their own merit, and that the venue of publication gives no indication of quality. We disagree with this assertion. To start our argument we'd like to ask: what is quality? Good-quality research provides evidence that is robust, ethical, stands up to scrutiny and adheres to accepted principles of professionalism, transparency, accountability and auditability. These features apply not only to the underlying research, but also to its presentation. In addition, quality includes an element of relevance and timeliness which will make an article useful or not. And finally, quality is about standards and consistency – for example, requiring authors to assert that they are all authors according to the ICMJE guidelines.

And once we agree what constitutes quality, the next question is what quality assurance the different venues place on their content. There is a lot of content out there. Currently there are 110,479,348 DOIs registered, and the 2018 STM report states that article growth is in the region of 5% per annum, with over three million articles published each year. And of course, articles can be published anywhere. In addition to journal articles, they can appear on preprint servers, on personal blogs, and on social networking sites. Each different venue places its own quality standards on what it publishes. Authors usually place only their "good stuff" on their personal sites; reputable journals include only items that have passed their quality assurance standards, including peer review; preprint archives include only materials that pass their criteria for inclusion.

Currently there are about 23,000 articles on bioRxiv, of which approximately a third will not be published (according to Kent Anderson's research). This may be due to quality problems, or perhaps the authors never sought publication. So they may or may not be "quality" to me – I'd have to read every one to check. The two thirds that are published are likely to have been revised after peer review, changing the original article that exists on bioRxiv (perhaps with extra experiments or reanalysis), so again I would have to read and compare every version on bioRxiv and in the final journal to check its usefulness and quality.

A reputable journal promises me that what it publishes is of some value to the community that it serves by applying a level of independent validation. We therefore argue that the venue does provide important information about the quality of what they publish, and in particular that the journal model imposes some order on the chaos of available information. Journal selectivity answers the most basic question: “Is this worth bothering with?”

What would I have to do if I believed that the venue of publication tells me nothing useful about its publications? I could use my own judgement to check the quality of everything that has been published, but there are two problems with this: (1) I don't have time to read every article, and (2) surely it is better to have the judgement of several people (reviewers and editors) than to rely simply on my own bias and my own ability to misread an article.

What do journals do to make us trust their quality assurance?

1. Peer review – The use of independent experts may be flawed but it still provides a safety net that is able to discover problems. High impact journals find it somewhat easier to obtain reviews from reputable scientists. A friend of mine who works in biomedical research says that she expects to spend about two hours per article reviewing — unless it is for Nature, in which case she would spend longer, about 4–5 hours on each article, and do more checking. Assuming she is not the only reviewer to take this position, it follows that Nature articles come under a higher level of pre-publication scrutiny than articles in some other journals.

2. Editorial judgement – Editors select for the vision and mission of their journal, providing a measure of relevance and quality for their readers. For example, at Learned Publishing we are interested in articles about peer review research. But we are not interested in articles which simply describe what peer review is: this is too simplistic for our audience and would be viewed as a poor quality article. In another journal it might be useful to their community and be viewed as a high quality article. At the Lancet, in-house editors check accepted articles — checking their data and removing inflated claims of importance — adding an extra level of quality assurance for their community.

3. Corrections – Good journals correct the scholarly record with errata and retractions. And high impact journals have higher rates of retraction caused by greater visibility and scrutiny, which can be assumed to result in a “cleaner” list of publications than in journals which receive less attention — therefore making their overall content more trustworthy because it is regularly evaluated and corrected.

And quality becomes a virtuous circle. High impact journals attract more authors keen to publish in them, which allows for more selectivity — choosing only the best, most relevant and most impactful science, rather than having to accept poorer quality (smaller studies for example) to fill the issues.

So we believe that journals do provide order out of the information tsunami, and a stamp of quality assurance for their own communities. Editorial judgement attempts to find the sweet spot: research that is both topical and of good quality, which is then moderated so that minor findings are not made to appear revolutionary. The combination of peer review and editorial judgement works to filter content, to select only articles that are useful to their community, and to moderate excessive claims. We don't assume that all journals get it right all the time. But some sort of quality control is surely better than none. The psychiatrist Winnicott came up with the idea of the "good enough" mother. We propose that there is a "good enough" editorial process that means readers can use these editorially approved articles to make clinical, professional or research decisions. Of course, not every journal delivers the same level of quality assurance. Therefore there are journals I trust more than others to publish good-quality work – the venue of publication informs me, so that I can make a judgement about the likely usefulness of a paper.

In summary, we believe that it is wrong to say that the venue tells us nothing useful about the quality of research. Unfiltered venues tell us that there is no guarantee of quality. Filtered venues tell us that there is some guarantee of reasonable quality. Filtered venues that I trust (because they have a good reputation in my community) tell me that the quality of their content is likely to match my expectations for validity, ethical standards, topicality, integrity, relevance and usefulness.

This Monday and Tuesday, I was at the R2R (Researcher to Reader) conference at BMA House in London. It’s the first time I’ve been to this, and I was there at the invitation of my old sparring partner Rick Anderson, who was organizing this year’s debate, on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”.

I was one half of the team arguing in favour of the proposition, along with Toby Green, currently managing director at Coherent Digital and previously head of publishing at the OECD for twenty years. Our opponents were Pippa Smart, publishing consultant and editor of Learned Publishing; and Niall Boyce, editor of The Lancet Psychiatry.

I’m going to blog three of the four statements that were made. (The fourth, that of Niall Boyce, is not available, as he spoke from handwritten notes.) I’ll finish this series with a fourth post summarising how the debate went, and discussing what I now think about the proposition.

But now, here is the opening statement for the proposition, co-written by Toby and me, and delivered by him.

The backs of the heads of the four R2R debaters as we watch the initial polling on the proposition. From left to right: me, Toby, Pippa, Niall.


 

What is the most significant piece of published research in recent history? One strong candidate is a paper called “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children” published in 1998. It was written by Andrew Wakefield et al., and postulated a link between the MMR vaccine and autism. This article became the launching point for the anti-vax movement, which has resulted in (among other things) 142,000 deaths from measles in 2019 alone. It has also contributed to the general decline of trust in expertise and the rise of fake news.

This article is now recognised as “not just poor science, [but] outright fraud” (BMJ). It was eventually retracted — but it did take its venue of publication 12 years to do so. Where did it appear? In The Lancet, one of the world’s most established and prestigious medical journals, its prestige quantified by a stellar Impact Factor of 59.1.

How could such a terrible paper be published by such a respected journal? Because the venue of its publication tells us nothing useful about the quality of a paper.

Retractions from prestigious venues are not restricted to rogues like Wakefield. Last month, Nobel Prize winner Frances Arnold said she was "bummed" to have to retract her 2019 paper on enzymatic synthesis of beta-lactams because the results were not reproducible. "Careful examination of the first author's lab notebook then revealed missing contemporaneous entries and raw data for key experiments," she explained. I.e. "oops, we prepared the paper sloppily, sooorry!"

Prof. Arnold is the first woman to be elected to all three National Academies in the USA and has been lauded by institutions as diverse as the White House, the BBC and the Vatican. She even appeared as herself in the TV series The Big Bang Theory. She received widespread praise for being so open about having to retract this work — yet what does it say of the paper's venue of publication, Science? Plainly the quality of this paper was not in the least assured by its venue of publication. Or to put it another way, the venue of its publication tells us nothing useful about the quality of a paper.

If we're going to talk about high- and low-prestige venues, we'll need a ranking system of some sort. The obvious ranking system is the Impact Factor — which, as Clarivate says, "can be used to provide a gross approximation of the prestige of journals". Love it or hate it, the IF has become ubiquitous, and we will reluctantly use it here as a proxy for journal prestige.

So, then: what does “quality” really mean for a research paper? And how does it relate to journal prestige?

One answer would be that a paper’s quality is to do with its methodological soundness: adherence to best practices that make its findings reliable and reproducible. One important aspect of this is statistical power: are enough observations made, and are the correlations significant enough and strong enough for the results to carry weight? We would hope that all reputable journals would consider this crucially important. Yet Brembs et al. (2013) found no association between statistical power and journal impact factor. So it seems the venue of its publication tells us nothing useful about the quality of a paper.

Or perhaps we can define "quality" operationally, something like how frequently a paper is cited — more being good, less being less good, right? Astonishingly, given that Impact Factor is derived from citation counts, Lozano et al. (2012) showed that the citation count of an individual paper is correlated only very weakly with the Impact Factor of the journal it's published in — and that correlation has been growing yet weaker since 1990, as the rise of the WWW has made discovery of papers easier irrespective of their venue. In other words, the venue of its publication tells us nothing useful about the quality of a paper.

We might at this point ask ourselves whether there is any measurable aspect of individual papers that correlates strongly with the Impact Factor of the journal they appear in. There is: Fang et al. (2012) showed that Impact Factor has a highly significant correlation with the number of retractions for fraud or suspected fraud. Wakefield’s paper has been cited 3336 times — did the Lancet know what it was doing by delaying this paper’s retraction for so long?[1] So maybe the venue of its publication does tell us something about the quality of a paper!

Imagine if we asked 1000 random scholars to rank journals on a "degree of excellence" scale. Science and The Lancet would, I'm sure you'll agree — like Liverpool's football team or that one from the "great state of Kansas" recently celebrated by Trump — be placed in the journal Premier League. Yet the evidence shows — both from anecdote and hard data — that papers published in these venues are at least as vulnerable to error, poor experimental design and even outright fraud as those in less exalted venues.

But let’s look beyond journals — perhaps we’ll find a link between quality and venue elsewhere.

I’d like to tell you two stories about another venue of publication, this time, the World Bank.

In 2016, the Bill & Melinda Gates Foundation pledged $5BN to fight AIDS in Africa. Why? Well, it was all down to someone at the World Bank having the bright idea to take a copy of their latest report on AIDS in Africa to Seattle and pitch the findings and recommendations directly to Mr Gates. I often tell this story as an example of impact. I think we can agree that the quality of this report must have been pretty high. After all, it unlocked $5BN for a good cause. But, of course, you’re thinking — D’oh! It’s a World Bank report, it must be high-quality. Really?

Consider also this story: in 2014, headlines like this lit up around the world: "Literally a Third of World Bank Policy Reports Have Never, Ever Been Read Online, By Anyone" (Slate) and "World Bank learns most PDFs it produces go unread" (Sydney Morning Herald). These headlines were triggered by a working paper, written by two economists from the World Bank and published on its website. The punchline? They were wrong; the paper was very wrong. Like Prof. Arnold's paper, it was "missing contemporaneous entries and raw data", in this case data from the World Bank's official repository. They'd pulled the data from an old repository. If they had also used data from the Bank's new repository they'd have found that every Bank report, however niche, had been downloaded many times. How do I know? Because I called the one guy who would know the truth, the Bank's Publisher, Carlos Rossel, and once he'd calmed down, he told me.

So, we have two reports from the same venue: one plainly exhibiting a degree of excellence, the other painfully embarrassing (and, by the way, it still hasn’t been retracted).

Now, I bet you're thinking, the latter is a working paper, therefore it hasn't been peer-reviewed and so it doesn't count. Well, the AIDS in Africa report wasn't "peer reviewed" either — in the sense we all understand — but that didn't stop Gates reaching for his Foundation's wallet. What about all the preprints being posted on bioRxiv and elsewhere about the coronavirus: do they "not count"? This reminds me of a lovely headline when CERN's paper on the discovery of the Higgs boson finally made it into a journal some months after the results had been revealed at a packed seminar, and weeks after the paper had been posted on arXiv: "Higgs boson discovery passes peer review, becomes actual science". Quite apart from the irony expressed by the headline writer, here's a puzzler for you. Was the quality of this paper assured by finally being published in a journal (with an impact factor one-tenth of Science's), or when it was posted on arXiv, or when it was presented at a seminar? Which venue assured the quality of this work?

Of course, none of them did because the venue of its publication tells us nothing about the quality of the paper. The quality is inherent in the paper itself, not in the venue where it is made public.

The Wakefield paper's lack of quality was likewise inherent in the paper itself; the fact that it was published in The Lancet (and is still available on more than seventy websites) did not make it high quality. Or to put it another way, the venue of its publication tells us nothing useful about the quality of a paper.

So what are different venues good for? Today's scholarly publishing system is still essentially the same as the one that Oldenburg et al. started in the 17th Century. This system evolved in an environment in which publishing costs were significant and grew with increased dissemination (increased demand meant higher print and delivery costs). This meant that editors had to make choices to keep costs under control — to select what to publish and what to reject. The selection criteria varied: some used geography to segment the market (The Chinese Journal of X, The European Journal of Y); some set up societies (Operational Research Society Journal); and others segmented the market by discipline (The International Journal of Neurology). These were genuinely useful distinctions to make, helping guide authors, readers and librarians to solutions for their authoring, reading and archiving needs.

Most journals pretend to use quality as a criterion to select within their niche — but isn't it funny that there isn't a Quality Journal of Chemistry or a Higher-Quality Journal of Physics? The real reasons for selection and rejection are of course to do with building brands and meeting business targets in terms of the number of pages published. If quality were the overarching criterion, why don't journals fluctuate in output each year, like the wine harvest? Down when there's a poor season and up when the sun shines?

If quality were the principal reason for acceptance and rejection, why is it absent from the list of most common reasons for rejection? According to Editage, one of the most common reasons is that the paper didn't fit the aims and scope of the journal, not that the paper was of poor quality. The current publishing process isn't a system for weeding out weak papers from prestige journals, leaving them with only the best. It's a system for sorting stuff into "houses" which is as opaque, unaccountable and random as the Sorting Hat which confronted Harry Potter at Hogwarts. This paper to the Journal of Hufflepuff; that one to the Journal of Slytherin!

So the venue of its publication can tell us useful things about a paper: its geographical origin, its field of study, the society that endorses it. The one thing it can’t tell us is anything useful about the quality of a paper.

Note

[1] We regret this phrasing. We asked “did the Lancet know what it was doing” in the usual colloquial sense of implying a lack of competence (“he doesn’t know what he’s doing”); but as Niall Boyce rightly pointed out, it can be read as snidely implying that The Lancet knew exactly what it was doing, and deliberately delayed the retraction in order to accumulate more citations. For avoidance of doubt, that is not what we meant; we apologise for not having written more clearly.

References

We were of course not able to give references during the debate. But since our statement included several citations, we can remedy that deficiency here.

If you check out the Shiny Digital Future page on this site, where we write about scholarly publishing, open access, open data and other such matters, you will see the following:

  • 2009: 9 posts
  • 2010: 5 posts
  • 2011: 9 posts
  • 2012: 116 posts! Woah!
  • 2013: 75 posts
  • 2014: 34 posts
  • 2015: 31 posts
  • 2016, up until the end of June: 34 posts
  • 2016, July onwards: 8 posts
  • 2017: 12 posts
  • 2018: 6 posts
  • 2019: 4 posts
  • 2020: nothing yet.

In the four and a half years from the start of 2012 to the end of June 2016, Matt and I (but mostly I) posted 290 times in the Shiny Digital Future, for an average of 64.4 posts a year (one every 5.6 days). Since then we've posted 30 times in a bit more than three and a half years, for an average of 8.6 posts a year (one every 42.6 days).

Shiny Digital Future posts by year (2016 split into halves)

Something happened half way through 2016 that cut my Shiny Digital Future productivity to 13% of what it was before. (And, no, I wasn’t bought off by Elsevier.)
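(If you want to see where those figures come from, here is a back-of-the-envelope sketch in Python. It is purely illustrative, not anything we actually ran; the post counts and time spans are simply the numbers quoted above.)

```python
# Back-of-envelope check of the posting figures quoted above.
# The post counts and time spans are the numbers given in this post.

before_posts, before_years = 290, 4.5   # start of 2012 to end of June 2016
after_posts, after_years = 30, 3.5      # July 2016 onwards (a bit more, really)

before_rate = before_posts / before_years   # roughly 64.4 posts per year
after_rate = after_posts / after_years      # roughly 8.6 posts per year

print(f"before the referendum: {before_rate:.1f} posts/year")
print(f"after the referendum:  {after_rate:.1f} posts/year")
print(f"that is {100 * after_rate / before_rate:.0f}% of the old rate")
```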

Here’s another funny thing. My eldest son was taking his A-levels in the summer of 2016. He had got so good at the Core 4 paper in maths that he was reliably scoring 95–100% on every past paper. He took the actual exam on the morning of 24th June, and scored 65% — a mark so low that it prevented him getting an A* grade.

Well, we all know what happened on the 23rd of June 2016: the Brexit referendum. I know that opinions differ on the desirability of Brexit, but for our family it was emotionally devastating. It’s the reason Dan was so knocked sideways that he botched his Core 4 paper. It’s hung over us all to a greater or lesser extent ever since, and it’s only with the recent triumph of the “Conservative” Party1 in the 2019 General Election that I’ve finally attained the ability to think of it as Somebody Else’s Problem. There is something gloriously liberating about being so comprehensively beaten that you can just give up.

I’m not going to rehearse all the reasons why Brexit is awful — not now, not ever again. (If you have a taste for that kind of thing, I recommend Chris Grey’s Brexit Blog, which is dispassionate, informed and forensic.) I’m not going to follow Brexit commentators on Twitter, and read all the desperately depressing analysis they highlight. I’m certainly not going to blog about it myself any more. More importantly, I’m not going to let the ongoing disintegration of my country dominate my mind or my emotions. I’m walking away: because obviously absolutely nothing I say or do about it can make the slightest bit of difference.

But there is an area of policy where I can hope to make some small difference, and that is of course open science — including but not limited to open access, open data, open reviewing and how research is evaluated. That’s where my political energy should have been going for the last three years, and it’s where that energy will be going from now on.

Because so much is happening in this space right now, and we need to be thinking about it and writing about it! Ludicrously, we’ve never even written anything about Plan S even though it’s nearly eighteen months old. But so much more is going on:

Each of these developments merits its own post and discussion, and I’m sorry I don’t have the energy to do that right now.

What I offer instead is an apology for letting my energy be stolen for so long by such a stupid issue; and a promise to refocus in 2020. I'll start shortly by writing up the R2R debate that I was involved in on Monday, on the proposition "The venue of its publication tells us nothing useful about the quality of a paper".

 


1The more right-wing of the two large political parties in the UK is called the Conservative party, and traditionally it has adhered to small-c conservative ideals. But at the moment, it's the exact opposite of what it says on the tin: it's been hijacked by a radical movement that, contra Chesterton's Fence, wants to smash everything up in the hope that whatever emerges from the chaos will be better than what we have now. It may be exciting; it may even (who knows?) prove to be right, in the end2. What it ain't, is conservative.

2Spoiler: it won’t.

 

Challenge: can you spot the Iguanodon pelvis in this photo?

Big news: I will be at the Burpee Museum PaleoFest this year. I’m speaking at 10:30 AM on Sunday, March 8. The title of my talk is, “In the Footsteps of Giants: Finding and Excavating New Fossils of Brachiosaurus from the Lower Morrison Formation in Utah”. Brian Engh, John Foster, and ReBecca Hunt-Foster are all coauthors.

The main page for PaleoFest 2020 is here (link), and on the right side of that page there’s a block of quick links to the speaker list, daily schedules, and so on. If you’re in the Midwest and not already booked for the weekend of March 7-8, come on out and I’ll talk your legs off about dinosaurs.

The photo above is of me at a table at the Raymond M. Alf Museum Fossil Fest on February 8, 2020. It’s nothing to do with the Burpee PaleoFest, I just needed a photo of me talkin’ Brachiosaurus. And yes, you can have that t-shirt — objectively the greatest in the history of the universe — when you cut it off my cold, dead carcass. (Or you can order your own; this model is the “Retro Brontosaurus Dinosaur T-shirt” by Dinosaur Tees and the Amazon link is here.)

I swear I’m not making this up: I was recently contacted by one of our patrons, who said he’d like to support us at the SV-POW! Patreon at $10/month. We didn’t have that tier at the time, only $1/mo. and $5/mo. So to accommodate him, and any others who theoretically might like to support us at that level, we created a $10 tier. There’s a new reward to go with this tier: in addition to being acknowledged in any papers that get written as a result of a trip that you help to fund, at $10/month you’ll also get an 8×10 art print once a year, either one of my skull drawings or a photograph, signed or unsigned. Here’s the link.

Our support is up to $57/mo. That might not sound like much, but $7/mo. is $84/yr., which is what we wanted when Mike launched the Patreon so we could get rid of ads on the site. The other $50/mo. is $600/yr., which is roughly the cost of a trans-Atlantic plane ticket. So that’s already one Matt-and-Mike get-together a year to do research and write papers, in addition to any others we were going to do anyway.

What would we do with more support? More research, and more writing. I get small grants now and then, and I get a yearly travel budget from my department, but grant-writing takes time away from research and paper-writing, and the departmental travel money doesn’t cover all the things I’d like to do. For example, I skipped SVPCA in 2018 so I could visit the Carnegie last spring. That’s a tough choice, a whole conference worth of ideas and conversations that I missed out on. And Mike is basically self-funded. We’re pretty good at converting travel money into new ideas and new data, and we’re going to start doing writing retreats where we hole up someplace cheap, far from museums, field sites, and other distractions, and just write. So if you like the stuff we do, please consider supporting us–we promise not to waste your donation.

Many thanks to everyone who supports our work, and to everyone else for sitting through this post. In the spirit of giving you more than you asked for, up top is the cervicodorsal transition in Giraffatitan brancai, MB.R.2181, in my favorite, inconvenient portrait orientation. And here’s a version with the centrum lengths and posterior widths given in cm. From Janensch (1950: figs. 49 and 50).

Reference

Janensch, Werner. 1950. Die Wirbelsäule von Brachiosaurus brancai. Palaeontographica (Suppl. 7) 3: 27-93.

No, not his new Brachiosaurus humerus — his photograph of the Chicago Brachiosaurus mount, which he cut out and cleaned up seven years ago:

This image has been on quite a journey. Since Matt published this cleaned-up photo, and furnished it under the Creative Commons Attribution (CC BY) licence, it has been adopted as the lead image of Wikipedia's Brachiosaurus page [archived]:

Consequently (I assume) it has now become Google’s top hit for brachiosaurus skeleton:

Last Saturday, Fiona and I went to Birdland, a birds-only zoo in the Cotswolds, about an hour away from where we live. The admission price also includes “Jurassic Journey”, a walking tour of a dozen or so not-very-good dinosaur models. In an interpretive centre in this area, I found this Brachiosaurus skeletal reconstruction stencilled on the wall:

I immediately knew it was the Chicago mount due to the combination of Giraffatitan anterior dorsals and Brachiosaurus posterior dorsals; but I found it more hauntingly familiar than that. A quick hunt turned up Matt’s seven-year-old post, and when I told Matt about my discovery he filled me in on its use in Wikipedia.

So this is 99% of a good story: we're delighted that this work is out there, and has resulted in a much better Brachiosaurus image at Birdland than the rather sad-looking Stegosaurus next to it. The only slight disappointment is that I couldn't find any sign of credit, which they really should have included given that Matt put the image out under CC BY rather than in the public domain.

But as Matt said: “Even though I didn’t get credited, I’m always chuffed to see my stuff out in the world.” So true.

 

This is the Jurassic World Legacy Collection Brachiosaurus. I think it might be an exclusive at Target stores here in the US. It turns up on other sites, like Amazon and eBay, but usually from 3rd-party sellers and with a healthy up-charge. Retails for 50 bucks. I got mine for Christmas from Vicki and London. Here’s the link to Target.com if you want to check it out (we get no kickbacks from this).

I thought it would be cool to leverage this thing at outreach events to talk about the new Brachiosaurus humerus that Brian Engh found last year, which a team of us got out of the ground and safely into a museum last October (full story here). But I needed a Brachiosaurus humerus, so I made one, and in this post I’ll show you how to do the same, for next to no money.

Depending on what base you start with and what materials you use, you could build a scale model of a Brachiosaurus humerus at any size. I wanted one that would match the JWLC Brach, so I started by taking some measurements of that. Here’s what I got:

Lengths

  • Head: 45mm
  • Neck: 455mm (x 20 = 9.1m = 29’10”)
  • Torso: 320mm
  • Tail: 320mm
  • Total: 1140mm (x 20 = 22.8m = 74'10")

Heights

  • Max head height: 705mm (x 20 = 14.1m = 46’3″)
  • Withers height: 360mm (x 20 = 7.2m = 23’7″)

The neck length, total length, and head height are pretty close to the mounted Giraffatitan in Berlin. The withers are a little high, as is the bottom of the animal’s belly. I suspect that the limbs on the model are oversized by about 10%. Nevertheless, the numbers say this thing is roughly 1/20 scale.

The largest humeri of Brachiosaurus and Giraffatitan are 213cm, which is about 3mm shy of 7 feet. So a 1/20 scale humerus should be 106.5mm, or 4.2 inches, or four-and-a-quarter if you want a nice, round number.

Incidentally, Chris Pratt is 6'2" (74 inches), and the Owen Grady action figure is 3.75", which is 1/20 of 6'3". So the action figure, the Brachiosaurus toy (er, highly detailed scientific model), and a ~4.2" humerus model will all be more or less in scale with each other.
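If you want to re-run the scaling arithmetic for a different base model, here is a minimal Python sketch of the unit conversions used above. It is mine, not part of the original build; the measurements are just the ones quoted in this post, and the helper name is made up.

```python
# Sanity check of the 1/20-scale arithmetic used above.
# Measurements are the ones quoted in this post; swap in your own model's numbers.

SCALE = 20  # the JWLC Brachiosaurus works out to roughly 1/20 scale

def model_to_life(mm):
    """Convert a model measurement (mm) to life size, in metres and feet-and-inches."""
    metres = mm * SCALE / 1000
    total_inches = metres / 0.0254
    feet, inches = divmod(total_inches, 12)
    return metres, int(feet), round(inches)

for name, mm in [("neck", 455), ("total length", 1140),
                 ("max head height", 705), ("withers height", 360)]:
    m, ft, inch = model_to_life(mm)
    print(f'{name}: {mm} mm on the model -> {m:.1f} m ({ft}\'{inch}") at life size')

# Going the other way: the largest real humeri are 213 cm, so at 1/20 scale...
humerus_mm = 2130 / SCALE
print(f"target model humerus: {humerus_mm:.1f} mm ({humerus_mm / 25.4:.2f} inches)")
```

That 106.5 mm target is the same "four and a quarter inches, near enough" number that the printed outline templates below are scaled to.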

I used a chicken humerus for my base. The vast majority of chickens in the US are slaughtered at 5 months, so they don’t get nearly big enough for their humeri to be useful for this project. Fortunately, there’s a pub in downtown Claremont, Heroes & Legends, that has giant mutant chicken hot wings, so I went there and collected chicken bones in the guise of a date. The photo above shows three right humeri (on the left) and one left humerus (on the right) after simmering and an overnight degreasing in a pot of soapy water. I used the same bone clean-up methods as in this post.

What should you do if you don’t have access to giant mutant chicken wings? My method of Brachio-mimicry involves some sculpting, so any reasonably straight bone that bells out a bit at the ends would work. You could use a drumstick in a pinch. Here are my humeri whitening in a tub of 3% hydrogen peroxide from the dollar store down the street.

Brachiosaurid humeri vary somewhat but they all have certain features in common. Here’s the right humerus of Vouivria, modified from Mannion et al. (2017: fig. 19) to show the features of interest to brachiosaur humerus-sculptors. The arrows on the far left point to a couple of corners, one where the deltopectoral crest (dpc in the figure) meets the proximal articular surface, and the other where the articular surface meets the long sweeping curve of the medial border of the humeral shaft.

Here’s a more printer-friendly version of the same diagram. Why did I use Vouivria for this instead of one of the humeri of Brachiosaurus itself? Mostly because it’s a complete humerus for which a nice multi-view was available. Runner-up in this category would have to go to the humerus of Pelorosaurus conybeari figured by Upchurch et al. (2015: fig. 18) in the Haestasaurus paper–here’s a direct link to that figure.

I knew that I’d be doing some sculpting, and I wanted a scale template to work off of, so I made these outlines from the Giraffatitan humerus figured by Janensch (1950) and reproduced by Mike in this post (middle two), and from the aforementioned Pelorosaurus conybeari humerus shown by Mike in this post (outer two). I scaled this diagram so that when printed to fill an 8.5×11 piece of printer paper, the humerus outlines would all be 4.25″–the same nice-round-number 1/20 scale target found above. Here’s a PDF version: Giraffatitan and Pelorosaurus humeri outlines for print.

Here’s the largest of my giant mutant chicken humeri, compared to the outlines. The chicken humerus isn’t bad, but it’s too short for 1/20 scale, the angles of the proximal and distal ends are almost opposite what they should be, and the deltopectoral crest is aimed out antero-laterally instead of facing straight anteriorly. Modification will be required!

Here's my method for lengthening the humerus: I cut the midshaft out of another humerus, and swapped it into the middle of the prospective Brachiosaurus model humerus.

To my immense irritation, I failed to get a photo of the lengthened humerus before I started sculpting on it. In the first wave of sculpting, I built up the proximal end and the deltopectoral crest, but missed some key features. On the right, I glued the proximal and distal ends of the donor humerus together; I might make this into a Haestasaurus humerus in the future.

I should mention my tools and materials. I have a Dremel but it wasn’t charged the evening I sat down to do this, so I made all the humerus cuts with a small, cheap hacksaw. I used superglue (cyanoacrylate or CA) for quick joins, and white glue (polyvinyl acetate or PVA) to patch holes, and I put gobs of PVA into the humeral shafts before sealing them up. For additive sculpting I used spackling compound, same stuff you use to patch holes in walls and ceilings, and for reductive sculpting I used sandpaper. I got most of this stuff from the dollar store.

Here we are after a second round of sculpting. The proximal end has its corners now, and the distal end is more accurately belled out, maybe even a bit too wide. It’s not a perfect replica of either the Giraffatitan or Pelorosaurus humeri, but it got sufficiently into the brachiosaurid humerus morphospace for my taste. A more patient or dedicated sculptor could probably make recognizable humeri for each brachiosaurid taxon or even specimen. I deliberately left it a bit rough in hopes that it would read as timeworn, fractured, and restored when painted and mounted. Again, a real sculptor could make some hay here by putting in fake cracks and so on.

The cheap spackling compound I picked up did not harden as much as some others I have used in the past. I had planned on sealing it anyway before painting, and for porous materials a quick, cheap sealant is white glue mixed with water. Here that coat of diluted PVA is drying, and I'm holding up a spare chicken humerus to show how far the model humerus has come.

Before painting, I drilled into the distal end with a handheld electric drill, and used a bamboo barbeque skewer as a mounting rod and handle. I hit it with a couple of coats of gray primer, then a couple of coats of black primer the next day. I could have gotten fancier with highlights and washes and so on, but I was scrambling to get this done for a public outreach event, in an already busy week.

And here’s the finished-for-now product. A couple of gold-finished cardboard gift boxes from my spare box storage gave their lids to make a temporary pedestal. When I get a version of this model that I’m really happy with, either by hacking further on this one or starting from scratch on a second, I’d love to get a wooden or stone trophy base with a little engraved plaque that looks like a proper museum exhibit, and replace the bamboo skewer with a brass rod. But for now, I’m pretty happy with this.

The idea of making dinosaurs out of chicken bones isn’t original with me. I was inspired by the wonderful books Make Your Own Dinosaur Out of Chicken Bones and T-Rex To Go, both by Chris McGowan. Used copies of both books can be had online for next to nothing, and I highly recommend them both.

If this post helps you in making your own model Brachiosaurus humerus, I’d love to see the results. Please let me know about your model in the comments, and happy building!

References

  • Janensch, Werner. 1950. Die Wirbelsäule von Brachiosaurus brancai. Palaeontographica (Suppl. 7) 3: 27-93.
  • Mannion, Philip D., Ronan Allain and Olivier Moine. 2017. The earliest known titanosauriform sauropod dinosaur and the evolution of Brachiosauridae. PeerJ 5:e3217. doi:10.7717/peerj.3217
  • Upchurch, Paul, Philip D. Mannion and Michael P. Taylor. 2015. The Anatomy and Phylogenetic Relationships of "Pelorosaurus" becklesii (Neosauropoda, Macronaria) from the Early Cretaceous of England. PLoS ONE 10(6):e0125819. doi:10.1371/journal.pone.0125819

On today's episode of the I Know Dino podcast, Garret interviews Brian and me about our new Brachiosaurus bones and how we got them out of the field. You should listen to the whole thing, but we're on from 10:10 to 48:15. Here's the link, go have fun. Many thanks to the I Know Dino crew for their interest, and to Garret for being such a patient and accommodating host. Amazingly, there is a much longer version of the interview available for I Know Dino Patreon supporters, so check that out for more Brachiosaurus yap than you are probably prepared for.

The photo is an overhead shot of me, Casey Cordes, and Yara Haridy smoothing down a plaster wrap around the middle of the humerus. The 2x4s aren't on yet, and the sun is low, so this must have been in the late afternoon on our first day in the quarry in October. Photo by Brian Engh, who perched up on top of the boulder next to the bone to get this shot.

For the context of the Brach-straction, see Part 1 of Jurassic Reimagined on Brian’s paleoart YouTube channel, and stay tuned for more.