The R2R debate, part 5: what I actually think
May 1, 2020
I’ve written four posts about the R2R debate on the proposition “the venue of its publication tells us nothing useful about the quality of a paper”:
- part 1: opening statement in support
- part 2: opening statement against the motion
- part 3: my response for the motion
- part 4: the video!
A debate of this kind is partly intended to persuade and inform, but is primarily entertainment — and so it’s necessary to stick to the position you’ve been assigned. But I don’t mind admitting, once the votes have been counted, that the statement goes a bit further than I would go in real life.
It took me a while to figure out exactly what I did think about the proposition, and the process of the debate was helpful in getting me to the point where I felt able to articulate it clearly. Here is where I landed shortly after the debate:
The venue of its publication can tell us something useful about a paper’s quality; but the quality of publication venues is not correlated with their prestige (or Impact Factor).
I’m fairly happy with this formulation: in fact, on revisiting my speech in support of the original proposition, it’s apparent that I was really arguing for this modified version. I make no secret of the fact that I think some journals are objectively better than others; but those with higher impact factors are often worse, not better.
What are the things that make a journal good? Here are a few:
- Coherent narrative order, with methods preceding results.
- All relevant information in one place, not split between a main document and a supplement.
- Explicit methods.
- Large, clear illustrations that can be downloaded at full resolution as prepared by the authors.
- All data available, including specimen photos, 3D models, etc.
- Open peer review: availability of the full history of submissions, reviews, editorial responses, rebuttal letters, etc.
- Well-designed experiments capable of replication.
- Honesty (i.e. no fabricated or cherry-picked data).
- Sample sizes large enough to demonstrate a real statistical effect.
- Realistic assessment of the significance of the work.
And the more I look at such lists, the more I realise that these quality indicators appear less often in “prestige” venues such as Science, Nature and Cell than they do in good, honest, working journals like PeerJ, Acta Palaeontologica Polonica or even our old friend the Journal of Vertebrate Paleontology. (Note: I am aware that the replication and statistical power criteria listed above generally don’t apply directly to vertebrate palaeontology papers.)
So where are we left?
I think — and I admit that I find this surprising — the upshot is this:
The venue of its publication can tell us something useful about a paper’s quality; but the quality of publication venues is inversely correlated with their prestige (or Impact Factor).
I honestly didn’t see that coming.
May 1, 2020 at 9:11 pm
Open peer review? How does that help? Full history of editorial responses, rebuttal letters, submissions, reviews and such. Is that for the author or the public? Isn’t it more important to launch into a paper AFTER publication? Or are you suggesting that this is what should be included in the actual published paper? That would certainly make papers much more interesting to read!
The rest of your bulleted points I would agree with.
May 1, 2020 at 11:52 pm
Well that was a fairly obvious corollary in hindsight. “Because we are the top journal everything we did historically must have been right” and “They are the top journal so we must emulate all of their mannerisms to compete”.
May 2, 2020 at 10:47 am
Hi, Dale. Peer-review history is for anyone who finds it useful. It can be used to verify that real peer review took place, rather than a rubber-stamping; it can show the history of the paper; it can provide a permanent record of parts of the work that were done and written up but excised from the published version; it can be (and is) used pedagogically to teach students how review is done; it provides a means for good reviewers to receive the credit they deserve, and for obstructive reviewers to be seen for what they are; and by doing so, it provides a disincentive for obstructive reviewers to let their worst tendencies control them. There are no doubt many other reasons why it’s a good thing to have, but those are the ones that leap immediately to mind.
May 3, 2020 at 6:34 pm
I detect that you have a need to “shame” some reviewers. Well … good luck with that. A little like trying to change a narcissist.
May 3, 2020 at 8:14 pm
Let me put it this way, Dale: I have on occasions (thankfully rare ones) had reviews of my own papers that I don’t believe would have been written had the reviewers known their reviews were going to be made public.
May 5, 2020 at 3:03 pm
Impact Factor is an index, a measurement. It clearly depends on correlations and assumptions to mean something else in abstract terms, such as importance, quality or success. I wouldn’t say that holds in every case, or is equally powerful (or linear) across the whole range, but I may agree with your newly found rule in the statistical sense (even though I presume the original, “not correlated”, may fit better).
Prestige, however, IS an abstract idea, a value. It is one node in a network of concepts by which we build our… moral guidelines? system of values? perspective? worldview? Anyway, it is an idea you may reach/build/share through assumptions, correlations and indicators. Is an expensive red sports car an indicator of “success”? Is being President or Prime Minister an indicator of “qualification”? Is IF a measurement of “correctness”, “reliability” or “importance”? Is number of readers an indicator of “prestige”?
I may question the appropriate denominator or the underlying journal database, or prefer the 10-year to the 2-year version, but the IF™ is what it is. Your “prestige”, on the contrary, is not my “prestige”, and although we may agree on what it means as a concept, it would take a big effort to discover (and agree on) the myriad aspects we use to build it, measure it, or connect it with other, similar ideas and values. Not to mention the hard task of trying to convince each other of the “true prestige”, or to agree on one different from that which our tribes/societies/cultures share and impose on us.
So “quality” cannot be inversely correlated with “prestige”. It is circular: they are one and the same, at least in my network of ideas/values. I guess it is the same for you too.
What you did, and I applaud it, is dig into what “quality” of a scientific communication (or journal, by extension) means to you, and distill it into an explicit list of the concepts/patterns/attitudes that indicate it. I might add several points, I think. And I am sure you are also prepared to defend with arguments each of those, and the list itself, with the intention (if I understand correctly) of convincing others that the “prestige of a journal” should be associated with THAT “quality” and not with OTHER (mainstream) indicators of “quality” (e.g., a bigger IF, number of tweets, bigger APCs, annual revenue of the supporting investment fund, number of people who will never be able to publish there, etc.).
Guess what: I agree. And we are certainly not alone.
But a simple correlation is not what we should aim for. You may be accepting the weapon your enemy chose. I would prefer:
`The venue of its publication can tell us something useful about a paper’s quality; but we should discuss what the quality of a publication venue is and what we need it for (spoiler: it has nothing to do with Impact Factors; check my list).`
[I hope you understand what I meant, and forgive the number of words and the time I needed; I find it hard to express concise subtleties in English]