Years ago, when I was young and stupid, I used to read papers containing phylogenetic analyses and think, “Oh, right, I see now, Euhelopus is not a mamenchisaurid after all, it’s a titanosauriform”. In other words, I believed the result that the computer spat out. Some time after that, I learned how to use PAUP* and run my own phylogenetic analysis and realised how vague and uncertain such results are, and how easily changed by tweaking a few parameters.

These days good papers that present phylogenetic analyses are very careful to frame the results as the tentative hypotheses that they are. (Except when they’re in Glam Mags, of course: there’s no space for that kind of nuance in those venues.)

It’s common now for careful work to present multiple different and contradictory phylogenetic hypotheses, arrived at by different methods or based on different matrices. For just one example, see how Upchurch et al.’s (2015) redescription of Haestasaurus (= “Pelorosaurus“) becklesii presents that animal as a camarasaurid (figure 15, arrived at by modifying the matrix of Carballido et al. 2011), as a very basal macronarian (figure 16, arrived at by modifying the continuous-and-discrete-characters matrix of Mannion et al. 2013), and as a basal titanosaur (figure 17, arrived at by modifying the discrete-characters-only matrix from the same paper). This is careful and courageous reporting, shunning the potential headline “World’s oldest titanosaur!” in favour of doing the work right. [1]

But the thing that really makes you understand how fragile phylogenetic analyses are is running one yourself. There’s no substitute for getting your hands dirty and seeing how the sausage is made.

And I was reminded of this principle today, in a completely different context, by a tweet from Alex Holcombe:

Some of us lost our trust in science, and in peer review, in a journal club. There we saw how many problems a bunch of ECRs notice in the average article published in a fancy journal.

Alex relays (with permission) this anecdote from an anonymous student in his Good Science, Bad Science class:

In the introduction of the article, the authors lay forth four very specific predictions that, upon fulfillment, would support their hypothesis. In the journal club, one participant actually joked that it read very much as though the authors ran the analysis, derived these four key findings, and then copy-pasted them into the introduction as though they were thought of a priori. I’m not an expert in this field and I don’t intend to insinuate that anything untoward was done in the paper, but I remember several participants agreeing that the introduction and general framework of the paper indeed felt very “HARKed“.

Here’s the problem: as the original tweet points out, this is about “problems a bunch of ECRs notice in the average article published in a fancy journal”. These are articles that have made it through the peer-review gauntlet and reached the promised land of publication. Yet still these foundational problems persist. In other words, peer-review did not resolve them.

I’m most certainly not suggesting that the peer-review filter should become even more obstructive than it is now. For my money it’s already swung way too far in that direction.

But I am suggesting we should all remain sceptical of peer-reviewed articles, just as we rightly are of preprints. Peer-review ain’t nuthin’ … but it ain’t much. We know from experiment that the chance of an article passing peer review is made up of one third article quality, one third how nice the reviewer is, and one third totally random noise. More recently we found that papers with a prestigious author’s name attached are far more likely to be accepted, irrespective of the content (Huber et al. 2022).

Huber et al. 2022, figure 1.
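To get a feel for how little signal a one-third/one-third/one-third split leaves, here is a toy simulation. This is my own illustration with made-up quality values, not the design or data of the experiment cited above: it just models a review score as an equal-weight average of article quality, reviewer leniency and random noise, then asks how often the better of two papers actually out-scores the worse one.

```python
import random

random.seed(42)

def review_score(quality):
    # Toy model: the outcome weights article quality, reviewer
    # leniency and pure noise equally (all on a 0-1 scale).
    leniency = random.random()
    noise = random.random()
    return (quality + leniency + noise) / 3.0

# Simulate many head-to-head comparisons between a clearly better
# paper (quality 0.8) and a clearly worse one (quality 0.4).
trials = 10_000
better_wins = sum(
    review_score(0.8) > review_score(0.4) for _ in range(trials)
)

print(f"better paper out-scores worse one in {better_wins / trials:.0%} of trials")
```

Under these (invented) numbers the clearly better paper only wins roughly three times out of four, which is the point: with two thirds of the score outside the authors’ control, a single accept/reject verdict tells you much less than it appears to.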

We need to get away from a mystical or superstitious view of peer-review as a divine seal of approval. We need to push back against wise-sounding pronouncements such as “Good reporting would have noted that the paper has not yet been peer-reviewed” as though this one bit of information is worth much.

Yeah, I said it.

References

Notes

  1. Although I am on the authorship of Upchurch et al. (2015), I can take none of the credit for the comprehensiveness and honesty of the phylogenetics section: all of that is Paul and Phil’s work.


Matt and I are writing a paper about Barosaurus cervicals (yes, again). Regular readers will recall that the best Barosaurus cervical material we have ever seen was in a prep lab for Western Paleo Labs. We have some pretty good photos, such as this one:

Barosaurus cervical vertebra lying on its right side in anterodorsal view (i.e. with dorsal to the left), showing the distinctive shape of the prezygapophyseal rami.

The problem is that this specimen was privately owned at the time we saw it, and so far as we know it still is. So according to all standard procedures, we should consider it unavailable to science until such time as it is deposited in an accredited museum. (I was pretty sure the SVP had an explicit policy to this effect, but I couldn’t find it on the site. Can anyone?)

So what should we do? All the possible courses of action seem unfortunate.

1. We could go ahead and include photos, drawings and descriptions of these vertebrae in the paper — but that would violate community norms by building an argument on observations that cannot in general be replicated by other researchers. (For all we know, these vertebrae are now decorating Nicolas Cage’s pool room.)

2. We could omit these vertebrae from the paper, but use the information we gained from examining them in formulating our diagnostic criteria for Barosaurus cervicals — but this would also not really be replicable, plus it would have that horrible “we know something that you don’t” quality.

3. We could act as though these vertebrae do not exist, or as though we had never seen them, writing the paper based only on our observations of inferior material and of the good AMNH 6341 material that is not accessible for study or photography — but that would make our characterisation of Barosaurus cervical morphology less helpful than it could be.

4. We could refrain from publishing on Barosaurus cervicals at all until such time as these vertebrae, or similarly well-preserved ones, are available to study at accredited institutions — but that would simply deprive the world of an interesting and exciting study.

Is there a fifth path that we have not seen? And if not, which of these four is the least objectionable?

Hey sports fans! I met David Lindblad at Beer ‘N Bones at the Arizona Museum of Natural History last month, and he invited me to talk dinosaurs on his podcast. So I did (LINK). For two hours. Some of what I talk about will be familiar to long-time readers – dinosaur butt-brains and the Clash of the Dinosaurs saga, for example. But I also just sorta turned off my inhibitions and let all kinds of speculative twaddle come gushing out, including the specter of sauropod polyphyly, which I don’t believe but can’t stop thinking about. David was a gracious and long-suffering host and let me yap on at length. It is more or less the kind of conversation you could have with me in a pub, if you let me do most of the talking and didn’t want to hear about anything other than dinosaurs.

Is it any good? Beats me – I’m way too close to this one to make that call. Let me know in the comments.

Oh, I didn’t have any visuals that really fit the theme so I’m recycling this cool image of speculative sauropod display structures by Brian Engh. Go check out his blog and Patreon and YouTube channel.

As we all know, university libraries have to pay expensive subscription fees to scholarly publishers such as Elsevier, Springer, Wiley and Informa, so that their researchers can read articles written by their colleagues and donated to those publishers. Controversially (and maybe illegally), when negotiating contracts with libraries, publishers often insist on confidentiality clauses — so that librarians are not allowed to disclose how much they are paying. The result is an opaque market with no downward pressure on prices, hence the current outrageously high prices, which are rising much more quickly than inflation even as publishers’ costs shrink due to the transition to electronic publishing.

On Thursday 11 April 2013, Oxford University hosted a conference called Rigour and Openness in 21st Century Science. The evening event was a debate on the subject Evolution or Revolution In Science Communication. During this debate, Stephen Curry of Imperial College noted that his librarian isn’t allowed to tell him how much they pay for Elsevier journals. This is the response of David Tempest, Elsevier’s Deputy Director of Universal Sustainable Research Access.

Here’s a transcript:

Curry [in reference to the previous answer]: I’m glad David Tempest is so interested in librarians being able to make costs transparent to their users, because at my university, Imperial College, my chief librarian can not tell me how much she pays for Elsevier journals because she’s bound by a confidentiality clause. Would you like to address that?

[Loud applause for the question]

Tempest: Well, indeed there are confidentiality clauses inherent in the system, in our Freedom Collections. The Freedom Collections do give a lot of choice and there is a lot of discount in there to the librarians. And the use, and the cost per use has been dropping dramatically, year on year. And so we have to ensure that, in order to have fair competition between different countries, that we have this level of confidentiality to make that work. Otherwise everybody would drive down, drive down, drive drive drive, and that would mean that …

[The last part is drowned in the laughter of the audience.]

So there you have it: confidentiality clauses exist because otherwise everybody would drive down prices. And we can’t have that, can we?

(Is this extracted segment of video unfairly misrepresenting Tempest? No. To see that for yourself, I highly recommend that you watch the video of the whole debate. It’s long — nearly two hours — but well worth the time. The section I used here starts at 1:09:50.)

Counting beans

October 10, 2012

The reason most of my work is in the form of journal articles is that I didn’t know there were other ways to communicate. Now that I know that there are other and in some ways demonstrably better ways (arXiv, etc.), my enthusiasm for sending stuff to journals is flagging. Whereas before I was happy to do it and the tenure beans were a happy side-effect, now I can see that the tenure beans are in fact shackles preventing me from taking a better path.