Learn to be sceptical by seeing how the sausage is made

September 16, 2022

Years ago, when I was young and stupid, I used to read papers containing phylogenetic analyses and think, “Oh, right, I see now, Euhelopus is not a mamenchisaurid after all, it’s a titanosauriform”. In other words, I believed the result that the computer spat out. Some time after that, I learned how to use PAUP* and run my own phylogenetic analyses, and realised how vague and uncertain such results are, and how easily they change when you tweak a few parameters.

These days, good papers that present phylogenetic analyses are very careful to frame the results as the tentative hypotheses that they are. (Except when they’re in Glam Mags, of course: there’s no space for that kind of nuance in those venues.)

It’s common now for careful work to present multiple different and contradictory phylogenetic hypotheses, arrived at by different methods or based on different matrices. For just one example, see how Upchurch et al.’s (2015) redescription of Haestasaurus (= “Pelorosaurus“) becklesii presents that animal as a camarasaurid (figure 15, arrived at by modifying the matrix of Carballido et al. 2011), as a very basal macronarian (figure 16, arrived at by modifying the continuous-and-discrete-characters matrix of Mannion et al. 2013), and as a basal titanosaur (figure 17, arrived at by modifying the discrete-characters-only matrix from the same paper). This is careful and courageous reporting, shunning the potential headline “World’s oldest titanosaur!” in favour of doing the work right. [1]

But the thing that really makes you understand how fragile phylogenetic analyses are is running one yourself. There’s no substitute for getting your hands dirty and seeing how the sausage is made.

And I was reminded of this principle today, in a completely different context, by a tweet from Alex Holcombe:

Some of us lost our trust in science, and in peer review, in a journal club. There we saw how many problems a bunch of ECRs notice in the average article published in a fancy journal.

Alex relays (with permission) this anecdote from an anonymous student in his Good Science, Bad Science class:

In the introduction of the article, the authors lay forth four very specific predictions that, upon fulfillment, would support their hypothesis. In the journal club, one participant actually joked that it read very much as though the authors ran the analysis, derived these four key findings, and then copy-pasted them into the introduction as though they were thought of a priori. I’m not an expert in this field and I don’t intend to insinuate that anything untoward was done in the paper, but I remember several participants agreeing that the introduction and general framework of the paper indeed felt very “HARKed”.

Here’s the problem: as the original tweet points out, this is about “problems a bunch of ECRs notice in the average article published in a fancy journal”. These are articles that have made it through the peer-review gauntlet and reached the promised land of publication. Yet still these foundational problems persist. In other words, peer review did not resolve them.

I’m most certainly not suggesting that the peer-review filter should become even more obstructive than it is now. For my money it’s already swung way too far in that direction.

But I am suggesting we should all remain sceptical of peer-reviewed articles, just as we rightly are of preprints. Peer review ain’t nuthin’ … but it ain’t much. We know from experiment that the chance of an article passing peer review is made up of one third article quality, one third how nice the reviewer is, and one third totally random noise. More recently we found that papers with a prestigious author’s name attached are far more likely to be accepted, irrespective of the content (Huber et al. 2022).
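To get a feel for what that “one third quality, two thirds everything else” split would mean in practice, here is a toy simulation. To be clear, this is an illustrative sketch, not the model from the experiment mentioned above: the uniform distributions, the 20% acceptance rate, and the scoring function are all my own assumptions.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def accept_score(quality):
    """Toy model: an acceptance score that is one-third article quality,
    one-third reviewer generosity, and one-third pure noise (all on 0-1)."""
    niceness = random.random()
    noise = random.random()
    return (quality + niceness + noise) / 3

# Simulate 10,000 submissions of uniformly distributed quality.
papers = [random.random() for _ in range(10_000)]
scores = [accept_score(q) for q in papers]

# Accept the top 20% by score, then ask: what fraction of the
# genuinely best 20% of papers actually got accepted?
cutoff = sorted(scores, reverse=True)[len(scores) // 5]
best = sorted(range(len(papers)), key=lambda i: papers[i], reverse=True)[: len(papers) // 5]
hit_rate = sum(1 for i in best if scores[i] >= cutoff) / len(best)
print(f"Top-quality papers that passed: {hit_rate:.0%}")
```

Under these assumptions, a substantial fraction of the genuinely best papers miss the cut while weaker ones slip through: better than a coin-flip, but a long way from a reliable quality signal.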

Huber et al. 2022, figure 1.

We need to get away from a mystical or superstitious view of peer review as a divine seal of approval. We need to push back against wise-sounding pronouncements such as “Good reporting would have noted that the paper has not yet been peer-reviewed”, as though this one bit of information is worth much.

Yeah, I said it.

References

Notes

  1. Although I am on the authorship of Upchurch et al. (2015), I can take none of the credit for the comprehensiveness and honesty of the phylogenetics section: all of that is Paul and Phil’s work.


One Response to “Learn to be sceptical by seeing how the sausage is made”

  1. Pedro Silva Says:

    I am just flabbergasted at how the authors of the paper you discussed got over 500 peer-review reports for a single paper, and >20% of the review invitations accepted in all conditions. In my work as an editor, I seldom get a 10% invitation acceptance rate, in spite of targeting all my invitations to people who work in the precise area/subject of the manuscript to review… It appears that people in Economics departments are much more generous peer-review-wise than biochemists.

