Last spring I was an invited speaker at PaleoFest at the Burpee Museum of Natural History in Rockford, Illinois. I meant to get these photos posted right after I got back. But I flew back from Illinois on Monday, March 9, 2020, and by the following weekend I was throwing together virtual anatomy labs for the med students. You know the rest. 

The wall of ceratopsians at the Burpee Museum. Every museum should have one of these.

I had a fantastic time at PaleoFest. The hosts were awesome, the talks were great, the Burpee is a cool museum to explore, and the swag was phenomenal.

An ontogenetic series of Triceratops skulls. Check out how the bony horn cores switch from back-curving to forward-curving. The keratin sheaths over the horn cores elongated, but they didn’t remodel, so adult trikes probably had S-curving horns.

I know I poke a lot of fun at non-sauropods around here, but the truth is that I’m a pan-dino-geek at heart. When I’m looking at theropods and ceratopsians I am mostly uncontaminated by specialist knowledge or a desire to work on them, so I can relax, and squee the good squee.

I’m a sucker for dinosaur skin. It’s just mind-blowing that we can tell more or less what it would feel like to pet a dinosaur.

Among the memorable talks last year: Win McLaughlin educated me about rhinos, which are a heck of a lot weirder than I thought; Larisa DeSantis gave a mind-expanding talk about mammalian diets, evolution, and environmental change; and Holly Woodward explained in convincing detail why “Nanotyrannus” is a juvenile T. rex.

The pride of the Burpee Museum: Jane, the juvenile T. rex.

But my favorite presentation of the conference was Susie Maidment’s talk on stegosaurs. It was one of those great talks in which the questions I had after seeing one slide were answered on the next slide, and where by the end of the presentation I had absorbed a ton of new information almost effortlessly, by just listening to an enthusiastic person talk almost conversationally about their topic. And when I say “effortlessly”, I mean for the audience–I know from long experience that presentations like that are born from deep, thorough knowledge of one’s topic, deliberate planning, and rehearsal.

The big T. rex mount is pretty great, too.

That’s not to slight the other speakers, of course. All the talks were good, and that’s not an easy thing to pull off. Full credit to Josh Matthews and the organizing committee for putting on such an engaging and inspiring conference.

Did I say the swag was phenomenal? The swag was phenomenal. Above are just a few of my favorite things: a Burpee-plated Rite-in-the-Rain field notebook, a fridge magnet, a cool sticker, and at the center, My Precious: a personalized Estwing rock hammer. Estwing makes nice stuff, and a lot of paleontologists and field geologists carry Estwing rock hammers. Estwing is also based in Rockford, and they’ve partnered with the Burpee Museum to make these personalized rock hammers for PaleoFest, which is pretty darned awesome.

I already had an Estwing hammer–one of the blue-grip models–which is good, because the engraved one is going in my office, not to the field. (If you’re wondering why my field hammer looks so suspiciously unworn, it’s because my original was stolen a few years ago, and I’m still breaking this one in. By doing stuff like this.)

There’s a little Burpee logo with a silhouette of Jane down at the end of the handle, so I had to take Jane to meet Jane.

Parting shot: I grew up in a house out in the country, about 2 miles outside of the tiny town of Hillsdale, Oklahoma, which is about 20 miles north of Enid, which is about 100 miles north-northwest of Oklahoma City. Hillsdale is less than an hour from Salt Plains National Wildlife Refuge, where you can go dig for selenite crystals like the ones shown above. The digging is only allowed in designated areas, to avoid unexploded ordnance from when the salt plains were used as a bombing range in World War II, and at certain times of year, to avoid bothering the endangered whooping cranes that nest there.

I don’t know how many times I went to Salt Plains to dig crystals as a kid, either on family outings or school field trips, but it was a lot. I still have a tub of them out in the garage (little ones, nothing like museum-quality). And there are nice samples, like the one shown above, in the mineral hall of just about every big natural history museum on the planet. One of my favorite things to do when I visit a new museum is go cruise the mineral display and find the selenite crystals from Salt Plains. I’ve seen Salt Plains selenite in London, Berlin, and Vienna, and in most of the US natural history museums that I’ve visited for research or for fun. The farm boy in me still gets a little thrill at seeing a little piece of northwest Oklahoma, from a place that I’ve been and dug, on display in far-flung cities.

I already credited Josh Matthews for organizing a fabulous conference, but I need to thank him for being such a gracious host. He helped me arrange transportation, saw that all my needs were met, kept me plied with food and drink, and drove me to Chicago, along with a bunch of other folks, for a Field Museum visit before my flight home, which is how I got this awesome photo, and also these awesome photos. Thanks also to my fellow speakers, for many fascinating conversations, and to the PaleoFest audience, for bringing their A game and asking good questions. I didn’t know that PaleoFest 2020 would be my last conference for a while, but it was certainly a good one to go out on.

Darren has written a brief review of TetZooMCon, the online event that replaced the now traditional annual conference of Tetrapod Zoology. I just want to add a few notes on the palaeoart workshop part of the event, hosted by John Conway’s moustache:

There were 140 people registered for the workshop, randomly allocated to one of fourteen palaeoartists leading the groups (although one artist didn’t show up). After John’s brief introduction, each of the groups met in its own breakout room to work on … well, whatever the leader chose.

There was an amazing line-up of artists, a real Who’s Who of the field, encompassing wildly different styles and including but not limited to Scott Hartman, David Krentz, Bob Nicholls, Steve White and Mark Witton. Some led workshops on colour, some on 3D modelling, some on integument and so on.

Happily, I landed in a session that was perfect for me, as a non-artist trying to pick up some essentials. Steve White (whose pen work I absolutely love) led us through drawing a T. rex with proper attention to anatomy, with each of us encouraged to draw along with him. For me it was an education in thinking about how details of the bony anatomy would have influenced musculature, and how that might have been apparent in the living animal. Here is my lame attempt:

Yes, I know all sorts of things are off with the proportions. But the point here was the process, not the result. And yes, it’s a bit shrinkwrapped in places, but that’s because of the exercise we were going through rather than necessarily reflecting how anyone thinks the animal looked in life.

I found this enormously helpful, and would happily have carried on far beyond the rather miserly one hour allocated to the workshop. I want to thank Darren and John for putting the whole event together, and especially Steve White for leading our group so well and responding so helpfully to all our questions.

I’ve written four posts about the R2R debate on the proposition “the venue of its publication tells us nothing useful about the quality of a paper”.

A debate of this kind is partly intended to persuade and inform, but is primarily entertainment — and so it’s necessary to stick to the position you’ve been assigned. But I don’t mind admitting, once the votes have been counted, that the statement goes a bit further than I would go in real life.

It took me a while to figure out exactly what I did think about the proposition, and the process of the debate was helpful in getting me to the point where I felt able to articulate it clearly. Here is where I landed shortly after the debate:

The venue of its publication can tell us something useful about a paper’s quality; but the quality of publication venues is not correlated with their prestige (or Impact Factor).

I’m fairly happy with this formulation: in fact, on revisiting my speech in support of the original proposition, it’s apparent that I was really speaking in support of this modified version. I make no secret of the fact that I think some journals are objectively better than others, and that those with higher impact factors are often worse, not better.

What are the things that make a journal good? Here are a few:

  • Coherent narrative order, with methods preceding results.
  • All relevant information in one place, not split between a main document and a supplement.
  • Explicit methods.
  • Large, clear illustrations that can be downloaded at full resolution as prepared by the authors.
  • All data available, including specimen photos, 3D models, etc.
  • Open peer review: availability of the full history of submissions, reviews, editorial responses, rebuttal letters, etc.
  • Well-designed experiments capable of replication.
  • Honesty (i.e. no fabricated or cherry-picked data).
  • Sample sizes big enough to show a real statistical effect.
  • Realistic assessment of the significance of the work.

And the more I look at such lists, the more I realise that these quality indicators appear less often in “prestige” venues such as Science, Nature and Cell than they do in good, honest, working journals like PeerJ, Acta Palaeontologica Polonica or even our old friend the Journal of Vertebrate Paleontology. (Note: I am aware that the replication and statistical power criteria listed above generally don’t apply directly to vertebrate palaeontology papers.)

So where are we left?

I think — and I admit that I find this surprising — the upshot is this:

The venue of its publication can tell us something useful about a paper’s quality; but the quality of publication venues is inversely correlated with their prestige (or Impact Factor).

I honestly didn’t see that coming.

It’s been a while, but to be fair the world has caught fire since I first started posting about the Researcher to Reader conference. Stay safe, folks. Don’t meet people. Stay indoors; or go outdoors where there’s no-one else. You know how it’s done by now. This is not a drill.

Anyway — I am delighted to announce that the R2R conference has now made available the video of the debate — as part of a playlist that is slowly filling up with videos of all the conference’s sessions and workshops.

So here it is!

Here’s how the timeline breaks down:

  • 0:18 — Mark Carden (pre-introduction)
  • 0:46 — Rick Anderson (introduction and initial vote)
  • 5:12 — Toby Green (proposing the motion)
  • 15:50 — Pippa Smart (opposing the motion)
  • 25:01 — Mike Taylor (responding for the motion)
  • 28:31 — Niall Boyce (responding for the opposition)
  • 31:34 — discussion
    • 32:09 — Tasha Mellins-Cohen; response from Pippa
    • 33:20 — Anthony Watkinson; Pippa
    • 35:15 — Catriona McCallum; Niall
    • 39:19 — anonymous online question; Mike
    • 39:56 — anonymous online question; Mike; Niall; Toby; Pippa; Mike
    • 46:27 — Robert Harrington; Mike; Toby
    • 47:38 — Kaveh Bazargan; Niall; Mike; Niall; Pippa; Mike; Pippa
    • 52:30 — Jennifer Smith; Pippa; Mike
  • 58:32 — Rick Anderson (wrap up and final vote)
  • 1:00:45 — Mark Carden (closing remarks)

A notable quality of the discussion that makes up the second half of this hour is that the two teams become gradually more conciliatory as it progresses.

Anyway, enjoy! And let us know whether you found the argument for or against the proposition compelling!


The Researcher to Reader (R2R) conference at the start of this week featured a debate on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”. I’ve already posted Toby Green’s opening statement for the proposition and Pippa Smart’s opening statement against it.

Now here is my (shorter) response in favour of the motion, which is supposed to be a response specifically to Pippa’s opening statement against. As with Toby’s piece, I mistimed mine and ran into my (rather niggardly) three-minute limit, so I didn’t quite get to the end. But here’s the whole thing.

Here I am giving a talk on the subject “Should science always be open” back at ESOF 2014. (I don’t have any photos of me at the R2R debate, so this is the closest thing I could find.)



Like the Brexit debate, this is one where it’s going to be difficult to shift people’s opinions. Most of you will have come here already entrenched on one side or other of this issue. Unlike the Brexit debate, I hope this is one where evidence will be effective.

And that, really, is the issue. All of our intuition tells us, as our colleagues have argued, that high-prestige journals carry intrinsically better papers, or at least more highly cited ones — but the actual data tells us that this is not true: that papers in these journals are no more statistically powerful, and are more prone to be inflated or even fraudulent. In the last few days, news has broken of a “paper mill” that has successfully seen more than 400 fake papers pass peer review at reputable mainstream publishers despite having absolutely no underlying data. Evidently the venue of its publication tells us nothing useful about the quality of a paper.

It is nevertheless true that many scientists, especially early career researchers, spend an inordinate proportion of their time and effort desperately trying to get their work into Science and Nature, slicing and dicing substantial projects into the sparsely illustrated extended-abstract format that these journals demand, in the belief that this will help their careers. Worse, it is also true that they are often correct: publications in these venues do help careers. But that is not because of any inherent quality in the papers published there, which in many cases are of lower quality than they would have been in a different journal. Witness the many two-page descriptions of new dinosaurs that merit hundred-page monographic treatments — which they would have got in less flashy but more serious journals like PLOS ONE.

If we are scientists, or indeed humanities scholars, then we have to respect evidence ahead of our preconceptions. And once you start looking for actual data about the quality of papers in different venues, you find that there is a lot of it — and more emerging all the time. Only two days ago I heard of a new preprint by Carneiro et al. It defines an “overall reporting score”, which it describes as “an objective dimension of quality that is readily measurable [as] completeness of reporting”. When they plotted this score against the impact factor of journals they found no correlation.

We don’t expect this kind of result, so we are in danger of writing it off — just as Brexiteers write off stories about economic damage and companies moving out of Britain as “project fear”. The challenge for us is to do what Daily Mail readers perhaps can’t: to rise above our preconceptions, and to view the evidence about our publishing regimen with the same rigour and objectivity that we view the evidence in our own specialist fields.

Different journals certainly do have useful roles: as Toby explained in his opening statement, they can guide us to articles that are relevant to our subject area, pertain to our geographical area, or relate to the work of a society of interest. What they can’t guide us to is intrinsically better papers.

In The Adventure of the Copper Beeches, Arthur Conan Doyle tells us that Sherlock Holmes cries out “Data! Data! Data! I can’t make bricks without clay.” And yet in our attempts to understand the scholarly publishing system that we all interact with so extensively, we all too easily ignore the clay that is readily to hand. We can, and must, do better.

And what does the data say? It tells us clearly, consistently and unambiguously that the venue of its publication tells us nothing useful about the quality of a paper.

References

  • Carneiro, Clarissa F. D., Victor G. S. Queiroz, Thiago C. Moulin, Carlos A. M. Carvalho, Clarissa B. Haas, Danielle Rayêe, David E. Henshall, Evandro A. De-Souza, Felippe Espinelli, Flávia Z. Boos, Gerson D. Guercio, Igor R. Costa, Karina L. Hajdu, Martin Modrák, Pedro B. Tan, Steven J. Burgess, Sylvia F. S. Guerra, Vanessa T. Bortoluzzi, Olavo B. Amaral. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. bioRxiv 581892. doi:10.1101/581892


Yesterday I told you all about the Researcher to Reader (R2R) conference and its debate on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”. I posted the opening statement for the proposition, which was co-written by Toby Green and me.

Now here is the opening statement against the proposition, presented by Pippa Smart of Learned Publishing, having been co-written by her and Niall Boyce of The Lancet Psychiatry.

(I’m sure it goes without saying that there is much in here that I disagree with. But I will let Pippa speak for herself and Niall without interruption for now, and discuss her argument in a later post.)

The debate in progress. I couldn’t find a photo of Pippa giving her opening statement, so here instead is her team-mate Niall giving his closing statement.



The proposal is that all articles or papers, from any venue, must be evaluated on their own merit and the venue of publication gives me no indicator of quality. We disagree with this assertion. To start our argument we’d like to ask, what is quality? Good quality research provides evidence that is robust, ethical, stands up to scrutiny and adheres to accepted principles of professionalism, transparency, accountability and auditability. These features not only apply to the underlying research, but also to the presentation. In addition, quality includes an element of relevance and timeliness which will make an article useful or not. And finally, quality is about standards and consistency – for example requiring authors to assert that they are all authors according to the ICMJE guidelines.

And once we agree what constitutes quality, the next question is what quality assurance do the different venues place on their content? There is a lot of content out there. Currently there are 110,479,348 DOIs registered, and the 2018 STM report states that article growth is in the region of 5% per annum, with over three million articles published each year. And of course, articles can be published anywhere. In addition to journal articles, they can appear on preprint servers, on personal blogs, and on social networking sites. Each venue places its own quality standards on what it publishes. Authors usually place only their “good stuff” on their personal sites; reputable journals include only items that have passed their quality assurance standards, including peer review. Preprint archives only include materials that pass their criteria for inclusion.

Currently there are about 23,000 articles on bioRxiv, of which approximately a third will not be published (according to Kent Anderson’s research). This may be due to quality problems, or perhaps the authors never sought publication. So they may or may not be “quality” to me – I’d have to read every one to check. The two thirds that are published are likely to have been revised after peer review, changing the original article that exists on bioRxiv (perhaps extra experiments or reanalysis), so again, I would have to read and compare every version on bioRxiv and in the final journal to check its usefulness and quality.

A reputable journal promises me that what it publishes is of some value to the community that it serves by applying a level of independent validation. We therefore argue that the venue does provide important information about the quality of what they publish, and in particular that the journal model imposes some order on the chaos of available information. Journal selectivity answers the most basic question: “Is this worth bothering with?”

What would I have to do if I believed that the venue of publication tells me nothing useful about its publications? I could use my own judgement to check the quality of everything that has been published, but there are two problems with this: (1) I don’t have time to read every article, and (2) surely it is better to have the judgement of several people (reviewers and editors) than to rely simply on my own bias and ability to mis-read an article.

What do journals do to make us trust their quality assurance?

1. Peer review – The use of independent experts may be flawed but it still provides a safety net that is able to discover problems. High impact journals find it somewhat easier to obtain reviews from reputable scientists. A friend of mine who works in biomedical research says that she expects to spend about two hours per article reviewing — unless it is for Nature, in which case she would spend longer, about 4–5 hours on each article, and do more checking. Assuming she is not the only reviewer to take this position, it follows that Nature articles come under a higher level of pre-publication scrutiny than some other journals.

2. Editorial judgement – Editors select for the vision and mission of their journal, providing a measure of relevance and quality for their readers. For example, at Learned Publishing we are interested in articles about peer review research. But we are not interested in articles which simply describe what peer review is: this is too simplistic for our audience and would be viewed as a poor quality article. In another journal it might be useful to their community and be viewed as a high quality article. At the Lancet, in-house editors check accepted articles — verifying their data and removing inflated claims of importance — adding an extra level of quality assurance for their community.

3. Corrections – Good journals correct the scholarly record with errata and retractions. And high impact journals have higher rates of retraction caused by greater visibility and scrutiny, which can be assumed to result in a “cleaner” list of publications than in journals which receive less attention — therefore making their overall content more trustworthy because it is regularly evaluated and corrected.

And quality becomes a virtuous circle. High impact journals attract more authors keen to publish in them, which allows for more selectivity — choosing only the best, most relevant and most impactful science, rather than having to accept poorer quality (smaller studies for example) to fill the issues.

So we believe that journals do provide order out of the information tsunami, and a stamp of quality assurance for their own communities. Editorial judgement attempts to find the sweet spot: both topical and good quality research, which is then moderated so that minor findings are not made to appear revolutionary. The combination of peer review and editorial judgement works to filter content, to select only articles that are useful to their community, and to moderate excessive claims. We don’t assume that all journals get it right all the time. But some sort of quality control is surely better than none. The psychiatrist Winnicott came up with the idea of the “good enough” mother. We propose that there is a “good enough” editorial process that means readers can use these editorially-approved articles to make clinical, professional or research decisions. Of course, not every journal delivers the same level of quality assurance. Therefore there are journals I trust more than others to publish good quality work – the venue of the publication informs me so that I can make a judgement about the likelihood of usefulness.

In summary, we believe that it is wrong to say that the venue tells us nothing useful about the quality of research. Unfiltered venues tell us that there is no guarantee of quality. Filtered venues tell us that there is some guarantee of reasonable quality. Filtered venues that I trust (because they have a good reputation in my community) tell me that the quality of their content is likely to match my expectations for validity, ethical standards, topicality, integrity, relevance and usefulness.

This Monday and Tuesday, I was at the R2R (Researcher to Reader) conference at BMA House in London. It’s the first time I’ve been to this, and I was there at the invitation of my old sparring partner Rick Anderson, who was organizing this year’s debate, on the proposition “The venue of its publication tells us nothing useful about the quality of a paper”.

I was one half of the team arguing in favour of the proposition, along with Toby Green, currently managing director at Coherent Digital and previously head of publishing at the OECD for twenty years. Our opponents were Pippa Smart, publishing consultant and editor of Learned Publishing; and Niall Boyce, editor of The Lancet Psychiatry.

I’m going to blog three of the four statements that were made. (The fourth, that of Niall Boyce, is not available, as he spoke from handwritten notes.) I’ll finish this series with a fourth post summarising how the debate went, and discussing what I now think about the proposition.

But now, here is the opening statement for the proposition, co-written by Toby and me, and delivered by him.

The backs of the heads of the four R2R debaters as we watch the initial polling on the proposition. From left to right: me, Toby, Pippa, Niall.



What is the most significant piece of published research in recent history? One strong candidate is a paper called “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children” published in 1998. It was written by Andrew Wakefield et al., and postulated a link between the MMR vaccine and autism. This article became the launching point for the anti-vax movement, which has resulted in (among other things) 142,000 deaths from measles in 2019 alone. It has also contributed to the general decline of trust in expertise and the rise of fake news.

This article is now recognised as “not just poor science, [but] outright fraud” (BMJ). It was eventually retracted — but it did take its venue of publication 12 years to do so. Where did it appear? In The Lancet, one of the world’s most established and prestigious medical journals, its prestige quantified by a stellar Impact Factor of 59.1.

How could such a terrible paper be published by such a respected journal? Because the venue of its publication tells us nothing useful about the quality of a paper.

Retractions from prestigious venues are not restricted to rogues like Wakefield. Last month, Nobel Prize winner Frances Arnold said she was “bummed” to have to retract her 2019 paper on enzymatic synthesis of beta-lactams because the results were not reproducible. “Careful examination of the first author’s lab notebook then revealed missing contemporaneous entries and raw data for key experiments,” she explained. I.e. “oops, we prepared the paper sloppily, sooorry!”

Prof. Arnold is the first woman to be elected to all three National Academies in the USA and has been lauded by institutions as diverse as the White House, BBC and the Vatican. She even appeared as herself in the TV series The Big Bang Theory. She received widespread praise for being so open about having to retract this work — yet what does it say of the paper’s venue of publication, Science? Plainly the quality of this paper was not in the least assured by its venue of publication. Or to put it another way, the venue of its publication tells us nothing useful about the quality of a paper.

If we’re going to talk about high- and low-prestige venues, we’ll need a ranking system of some sort. The obvious ranking system is the Impact Factor — which, as Clarivate says “can be used to provide a gross approximation of the prestige of journals”. Love it or hate it, the IF has become ubiquitous, and we will reluctantly use it here as a proxy for journal prestige.

So, then: what does “quality” really mean for a research paper? And how does it relate to journal prestige?

One answer would be that a paper’s quality is to do with its methodological soundness: adherence to best practices that make its findings reliable and reproducible. One important aspect of this is statistical power: are enough observations made, and are the correlations significant enough and strong enough for the results to carry weight? We would hope that all reputable journals would consider this crucially important. Yet Brembs et al. (2013) found no association between statistical power and journal impact factor. So it seems the venue of its publication tells us nothing useful about the quality of a paper.

Or perhaps we can define “quality” operationally, something like how frequently a paper is cited — more being good, less being less good, right? Astonishingly, given that Impact Factor is derived from citation counts, Lozano et al. (2012) showed that citation count of an individual paper is correlated only very weakly with the Impact Factor of the journal it’s published in — and that correlation has been growing yet weaker since 1990, as the rise of the WWW has made discovery of papers easier irrespective of their venue. In other words, the venue of its publication tells us nothing useful about the quality of a paper.

We might at this point ask ourselves whether there is any measurable aspect of individual papers that correlates strongly with the Impact Factor of the journal they appear in. There is: Fang et al. (2012) showed that Impact Factor has a highly significant correlation with the number of retractions for fraud or suspected fraud. Wakefield’s paper has been cited 3336 times — did the Lancet know what it was doing by delaying this paper’s retraction for so long?[1] So maybe the venue of its publication does tell us something about the quality of a paper!

Imagine if we asked 1000 random scholars to rank journals on a “degree of excellence” scale. Science and The Lancet would, I’m sure you’ll agree — like Liverpool’s football team or that one from the “great state of Kansas” recently celebrated by Trump — be placed in the journal Premier League. Yet the evidence shows — both from anecdote and hard data — that papers published in these venues are at least as vulnerable to error, poor experimental design and even outright fraud as those in less exalted venues.

But let’s look beyond journals — perhaps we’ll find a link between quality and venue elsewhere.

I’d like to tell you two stories about another venue of publication, this time, the World Bank.

In 2016, the Bill & Melinda Gates Foundation pledged $5BN to fight AIDS in Africa. Why? Well, it was all down to someone at the World Bank having the bright idea to take a copy of their latest report on AIDS in Africa to Seattle and pitch the findings and recommendations directly to Mr Gates. I often tell this story as an example of impact. I think we can agree that the quality of this report must have been pretty high. After all, it unlocked $5BN for a good cause. But, of course, you’re thinking — D’oh! It’s a World Bank report, it must be high-quality. Really?

Consider also this story: in 2014, headlines like this lit up around the world: “Literally a Third of World Bank Policy Reports Have Never, Ever Been Read Online, By Anyone” (Slate) and “World Bank learns most PDFs it produces go unread” (Sydney Morning Herald). These headlines were triggered by a working paper, written by two economists from the World Bank and published on its website. The punchline? They were wrong, the paper was very wrong. Like Prof. Arnold’s paper they were “missing contemporaneous entries and raw data”, in this case data from the World Bank’s official repository. They’d pulled the data from an old repository. If they had also used data from the Bank’s new repository they’d have found that every Bank report, however niche, had been downloaded many times. How do I know? Because I called the one guy who would know the truth, the Bank’s Publisher, Carlos Rossel, and once he’d calmed down, he told me.

So, we have two reports from the same venue: one plainly exhibiting a degree of excellence, the other painfully embarrassing (and, by the way, it still hasn’t been retracted).

Now, I bet you’re thinking, the latter is a working paper, therefore it hasn’t been peer-reviewed and so it doesn’t count. Well, the AIDS in Africa report wasn’t “peer reviewed” either — in the sense we all understand — but that didn’t stop Gates reaching for his Foundation’s wallet. What about all the preprints being posted on bioRxiv and elsewhere about the coronavirus: do they “not count”? This reminds me of a lovely headline when CERN’s paper on the discovery of the Higgs boson finally made it into a journal some months after the results had been revealed at a packed seminar, and weeks after the paper had been posted on arXiv: “Higgs boson discovery passes peer review, becomes actual science”. Quite apart from the irony expressed by the headline writer, here’s a puzzler for you. Was the quality of this paper assured by finally being published in a journal (with an impact factor one-tenth of Science’s), or when it was posted on arXiv, or when it was presented at a seminar? Which venue assured the quality of this work?

Of course, none of them did because the venue of its publication tells us nothing about the quality of the paper. The quality is inherent in the paper itself, not in the venue where it is made public.

The Wakefield paper’s lack of quality was also inherent in the paper itself, and the fact that it was published in The Lancet (and is still available on more than seventy websites) did not mean it was high quality. Or to put it another way, the venue of its publication tells us nothing useful about the quality of a paper.

So what are different venues good for? Today’s scholarly publishing system is still essentially the same as the one that Oldenburg et al. started in the 17th century. This system evolved in an environment in which publishing costs were significant and grew with increased dissemination (increased demand meant higher print and delivery costs). This meant that editors had to make choices to keep costs under control — to select what to publish and what to reject. The selection criteria varied: some used geography to segment the market (The Chinese Journal of X, The European Journal of Y); some set up societies (Operational Research Society Journal); and others segmented the market by discipline (The International Journal of Neurology). These were genuinely useful distinctions to make, helping guide authors, readers and librarians to solutions for their authoring, reading and archiving needs.

Most journals pretend to use quality as a criterion to select within their niche — but isn’t it funny that there isn’t a Quality Journal of Chemistry or a Higher-Quality Journal of Physics? The real reasons for selection and rejection are of course to do with building brands and meeting business targets in terms of the number of pages published. If quality were the overarching criterion, why don’t journals fluctuate in output each year, like the wine harvest? Down when there’s a poor season and up when the sun shines?

If quality were the principal reason for acceptance and rejection, why is it absent from the list of most common reasons for rejection? According to Editage, one of the most common reasons is that the paper didn’t fit the aims and scope of the journal. Not because the paper is of poor quality. The current publishing process isn’t a system for weeding out weak papers from prestige journals, leaving them with only the best. It’s a system for sorting stuff into “houses”, which is as opaque, unaccountable and random as the Sorting Hat which confronted Harry Potter at Hogwarts. This paper to the Journal of Hufflepuff; that one to the Journal of Slytherin!

So the venue of its publication can tell us useful things about a paper: its geographical origin, its field of study, the society that endorses it. The one thing it can’t tell us is anything useful about the quality of a paper.

Note

[1] We regret this phrasing. We asked “did the Lancet know what it was doing” in the usual colloquial sense of implying a lack of competence (“he doesn’t know what he’s doing”); but as Niall Boyce rightly pointed out, it can be read as snidely implying that The Lancet knew exactly what it was doing, and deliberately delayed the retraction in order to accumulate more citations. For avoidance of doubt, that is not what we meant; we apologise for not having written more clearly.

References

We were of course not able to give references during the debate. But since our statement included several citations, we can remedy that deficiency here.

  • Brembs, Björn, Katherine Button, and Marcus Munafò. 2013. Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience 7:291. doi:10.3389/fnhum.2013.00291
  • Fang, Ferric C., R. Grant Steen, and Arturo Casadevall. 2012. Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences 109(42):17028–17033. doi:10.1073/pnas.1212247109
  • Lozano, George A., Vincent Larivière, and Yves Gingras. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140–2145. doi:10.1002/asi.22731

Regular readers will remember that we followed up our 1VPC talk about what it means for a vertebra to be horizontal by writing it up as a paper, and doing it in the open. That manuscript is now complete, and published as a preprint (Taylor and Wedel 2019).

Taylor and Wedel (2019: Figure 5). Haplocanthosaurus sp. MWC 8028, caudal vertebra ?3, in cross section, showing medial aspect of left side, cranial to the right, in three orientations. A. In “articular surfaces vertical” orientation (method 2 of this paper). The green line joins the dorsal and ventral margins of the caudal articular surface, and is oriented vertically; the red line joins the dorsal and ventral margins of the cranial articular surface, and is nearly but not exactly vertical, instead inclining slightly forwards. B. In “neural canal horizontal” orientation (method 3 of this paper). The green line joins the cranial and caudal margins of the floor of the neural canal, and is oriented horizontally; the red line joins the cranial and caudal margins of the roof of the neural canal, and is close to horizontal but inclined upwards. C. In “similarity in articulation” orientation (method 4 of this paper). Two copies of the same vertebra, held in the same orientation, are articulated optimally, then the group is rotated until the two are level. The green line connects the uppermost point of the prezygapophyseal rami of the two copies, and is horizontal; but a horizontal line could join the two copies of any point. It happens that for this vertebra methods 3 and 4 (parts B and C of this illustration) give very similar results, but this is accidental.

The preprint has all the illustrations and their captions at the back of the PDF. If you prefer to have them inline in the text, where they’re referenced — and who wouldn’t? — you can download a better version of the manuscript from the GitHub archive.

By the way, you may have noticed that what started out written in Markdown has mutated into an MS-Word document. Why? Well, because journals won’t accept submissions in Markdown. It was a tedious and error-prone job to convert the Markdown into MS-Word, and not one I am keen to repeat. For this reason, I think I am unlikely to use Markdown again for papers.
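(For anyone facing the same chore: pandoc can automate much of a Markdown-to-Word conversion, though it won’t magically satisfy a journal’s formatting requirements. Here is a minimal sketch using the pypandoc wrapper; the filenames are hypothetical, and pandoc itself must be installed.)

import pypandoc

# Minimal sketch: convert a Markdown manuscript to an MS-Word document
# using pandoc, via the pypandoc wrapper. "manuscript.md" is a
# hypothetical filename, not the actual manuscript discussed above.
result = pypandoc.convert_file(
    "manuscript.md",               # Markdown source
    "docx",                        # target format: MS-Word
    outputfile="manuscript.docx",  # write the converted document here
)
assert result == ""  # convert_file returns an empty string when writing to a file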

References

  • Taylor, Michael P., and Mathew J. Wedel. 2019. What do we mean by the directions “cranial” and “caudal” on a vertebra? PeerJ PrePrints 7:e27437v2. doi:10.7287/peerj.preprints.27437v2

Hey! Do you like what we’re doing?

If you do, you might like to think about becoming a patron, making a small monthly donation to SV-POW!. We will use your money to fund research trips; if you donate $5 per month (or more), we will formally acknowledge you in papers that result from research trips that you helped to fund.


This awesome photo was taken in the SVPCA 2019 exhibit area by Dean Lomax (L). On the right, Jessie Atterholt, me, and Mike are checking out some Isle of Wight rebbachisaurid vertebrae prepped by Mick Green, who is juuuust visible behind Dean. Jessie’s holding a biggish (as rebbachisaurids go) dorsal or caudal centrum and partial arch, me a lovely little cervical, and Mike an astonishingly delicate and beautiful dorsal. You can see behind us more tables full of awesome fossils, and there were more still across the way, behind Dean and Mick. I was going to throw this photo into the last post to illustrate the exhibit area, but by the time the caption had hit three lines long, I realized it needed a post of its own.

Photo courtesy of Dean, and used with permission. Mark your calendars: on Sunday, Oct. 13, Dean will be speaking at TEDx Doncaster, with a talk titled, “My unorthodox path to success: how my passion for the past shaped my future”. You can follow the rest of Dean’s gradual conquest of the paleosphere through his website, http://www.deanrlomax.co.uk/.

As usual I came back from SVPCA to a mountain of un-dealt-with day-job work, which is why it’s taken me so long to get this post done and up. I wanted to get it posted as quickly as I could decently arrange, because I had a fantastic time at this year’s meeting and I wanted to document a few reasons why, both to thank this year’s hosts and to perhaps inspire the organizers of future meetings.

A shot from the back of the banquet-hall-turned-lecture-theater during Mike’s talk.

1. Space

This year’s presentation space was unlike any I can remember from previous SVPCAs. Instead of being in a lecture hall, talks were held in a big ballroom, and attendees sat in chairs at big circular banquet tables. This had a LOT of positive effects: no edging along long rows of seats to get in or out between talks, easy discussion around and between the tables at the breaks, the opportunity for a group of people to sit together as a group (vs a line or same-facing block), plenty of space to set notebooks, laptops, papers, pens, drinks, etc. I realize that meeting space is probably one of the things that conference organizers have the least control over, but at least from what I saw this year I’d say the ballroom model works even better than the lecture hall model, so that’s a possible consideration for the future.

2. Time

Owing to the smaller-than-normal number of abstract submissions — possibly a function of the meeting being on an island rather than the, uh, somewhat larger island of Great Britain — everyone who asked for a talk got one, and the talk slots were long enough for full 15-minute talks and 5 minutes for questions. So the meeting seemed decompressed. No-one really rushed through their talks (although Mike did speak very quickly), and there was usually plenty of time for questions, and the all-important coffee top-up or between-breaks bio-refresh. I know that a fuller conference is in some ways a healthier conference, and I still maintain that if talks have to be trimmed at future meetings, established players like myself should take the hit so students and early-career-researchers can have some runway, but I still appreciated the more relaxed pace of this meeting.

3. Food and drink

Food and drink service was probably the best that I have experienced at a paleo conference, full stop. I wish I had taken a photo of the ranked rows of coffee cups on saucers, because they never ran out. I don’t think we ever ran out of coffee, either. A lunch of sandwiches, crisps, veggies, and hummus (edit: and cheese, lots of beautiful cheese!) was provided all three days of the conference, and from what I saw, the lunches ran down to a bare handful of sandwiches at the very end but didn’t quite run out — and this was after everyone had ample opportunity to go back for more. Simply an outstanding job.

If I had one quibble, it was that the bar at Cowes Yacht Haven opened about five minutes before the start of Don Henderson’s Fox Lecture on Wednesday evening, without warning and after a lot of people (Mike and me included) had brought in drinks from outside, which we were then told we couldn’t drink on the premises. I realize that the opening and closing of the yacht club bar was probably outside the control of the organizers, but it was an annoyance for those of us who wanted to have a drink with the evening lecture.

4. Exhibitors

I admit to being disappointed when I realized that the meeting would be at Cowes rather than near the Dinosaur Isle museum in Sandown. We did get to visit the museum for the Tuesday evening icebreaker, but other than that we were in a different town entirely. The organizers’ clever solution was to bring the fossils to the paleontologists: several local collectors brought fossils for us to pore over on breaks and during poster time. This was particularly great for Mike, Jessie, and me, since so many of the fossils on display were from sauropods. Jessie and I were able to recognize neural canal ridges in the vertebrae of a rebbachisaurid for the first time, and we were able to use a brachiosaur caudal to demonstrate the ridges to Femke Holwerda, who then told us she’d seen them in a cetiosaur caudal. So our research made meaningful advancements because of the specimens on display, and we made useful contacts.

Speaking of Femke, her big Patagosaurus redescription has been accepted for publication at an OA outlet, so look for that most-welcome work in the not-too-distant future.

There were also paleoartists among the exhibitors, including John Sibbick, Mark Witton, and Luis Rey, among others, including some local artists. I picked up a nice print of a hand-drawn sauropod caudal by Trudie Wilson (this Trudie Wilson, not that Trudie Wilson, although I’m sure she’s a wonderful person too), which I need to do a whole post about, and will soon. I can’t remember now who proposed it, but someone remarked in one of the open sessions about how nice it was to have so much paleoart on display, and that maybe that was something that future meetings could lean into, including having paleoartists give talks about their art. That’s not unprecedented — John Conway and Bob Nicholls have both given presentations on paleoart at previous meetings, either in regular sessions or at evening social functions — but it is a great idea, and one I heartily endorse.

5. Proximity to everything else

Mike did sterling work finding an AirBnB house for a bunch of us (Mike, Darren, Mark Evans, Femke Holwerda, Jeff Liston, Mark Witton, Georgia Witton-Maclean, and Vicki and London and me) that was 300 feet from the entrance to Cowes Yacht Haven and about 700 feet from the banquet hall where the talks were held. I don’t think I’ve ever had such a short walk between my lodgings and the talk venue, even when I’ve stayed in the hotel where the conference was being held. There was also a Sainsbury’s grocery store, a bank of ATMs, and a bunch of restaurants within, seriously, a two-minute walk of the venue. I realize that this was also a lucky circumstance, not readily repeatable for other meetings that take place in museums or university lecture halls at some remove from commercial districts, but it sure was nice. If you had ten minutes, you could legit pop out to Sainsbury’s for some crisps or a beer, and be back at your seat with time to spare.

6. Loot

This one is purely personal, and mostly outside the organizers’ control. (Although they did carelessly put those exhibitors right in the path of my wallet, which fortunately was only running at about Category 3 this trip.) I’m only listing it here to guilt me into finishing the post (or posts) about the items I acquired on the trip, but folks, I did all right. More on that later.

So, a huge thank-you to the organizers of this year’s SVPCA for pulling off such a comfortable and enjoyable meeting. It was a gem. For more on what it was like, please see this post by Emma Nicholls, Deputy Keeper of Natural History at London’s Horniman Museum. If you know of other post-SVPCA conference reviews or retrospectives, please post them in the comments.