An abstract should be a surrogate for a paper, not an advertisement for it
May 13, 2018
It’s common to come across abstracts like this one, from an interesting paper on how a paper’s revision history influences how often it gets cited (Rigby, Cox and Julian 2018):
Journal peer review lies at the heart of academic quality control. This article explores the journal peer review process and seeks to examine how the reviewing process might itself contribute to papers, leading them to be more highly cited and to achieve greater recognition. Our work builds on previous observations and views expressed in the literature about (a) the role of actors involved in the research and publication process that suggest that peer review is inherent in the research process and (b) on the contribution reviewers themselves might make to the content and increased citation of papers. Using data from the journal peer review process of a single journal in the Social Sciences field (Business, Management and Accounting), we examine the effects of peer review on papers submitted to that journal including the effect upon citation, a novel step in the study of the outcome of peer review. Our detailed analysis suggests, contrary to initial assumptions, that it is not the time taken to revise papers but the actual number of revisions that leads to greater recognition for papers in terms of citation impact. Our study provides evidence, albeit limited to the case of a single journal, that the peer review process may constitute a form of knowledge production and is not the simple correction of errors contained in submitted papers.
This tells us that a larger number of revisions leads to (or at least is correlated with) an increased citation-count. Interesting!
Immediately, I have two questions, and I bet you do, too:
1. What is the size of the effect?
2. How robust is it?
If their evidence says that each additional round of peer review yields a dozen additional citations, I might be prepared to revise my growing conviction that multiple rounds of peer review are essentially a waste of time. If it says that each round yields 0.0001 additional citations, I won’t. And if the effect is statistically insignificant, I’ll ignore it completely.
But the abstract doesn’t tell me those simple and fundamental facts, which means the abstract is essentially useless. Unless the authors’ goal for the abstract was for it to be an advertisement for the paper — but that’s not what an abstract is for.
In the old days, authors didn’t write abstracts for their own papers. These were provided after the event — sometimes after publication — by third parties, as a service for those who did not have time to read the whole paper but were interested in its findings. The goal of an abstract is to act as a summary of the paper, a surrogate that a reader can absorb instead of the whole paper, and which summarises the main findings. (I find it interesting that in some fields, the term “précis” or “synopsis” is used: both are more explicit.)
Please, let’s all recognise the painful truth that most people who read abstracts of our papers will not go on to read the full manuscripts. Let’s write our abstracts for those short-on-time people, so they go away with a clear and correct understanding of what our findings were and how strongly they are supported.
May 14, 2018 at 2:14 pm
If indeed reading an abstract is often used as a valid substitute for evaluating the experimental evidence, then I think it’s actually better to keep abstracts a bit vague and “force” people to dig through the data and their interpretations. If someone doesn’t want to make the effort, then maybe they should not be left with a clear-cut message that’ll be uncritically preserved and maybe passed on in discussions, etc. It’s not a matter of paternalism, but of the potentially “polluting” effect that uncritical acceptance of claims has on one’s field.
Trying to give an idea of how strongly supported our conclusions are sounds good, but I don’t see how that could practically work in an abstract.
Regarding the first part, I guess the potential* impact of revision rounds may really depend on the field and thus on what a reviewer may be able to ask. A quick experiment may allow a detail to be worked out, improving the whole contribution at small cost for the authors. And it may be something that the authors didn’t think of.
*Practically, often reviewers miss the point and end up asking things that are either not relevant or not relevant AND painful to answer.
May 14, 2018 at 2:34 pm
The first rule of scholarship (and indeed life) is that you can’t force people to do the things that would be convenient for you. Every one of us runs across dozens of papers every week that we might be interested in. There is simply no way we will read them all. (In many cases, we can’t read them, because they are behind paywalls.)
With my author hat on, of course I believe that each of my papers is a special snowflake that everyone should take a few hours to admire in detail. But we have to recognise that it just ain’t gonna happen. 90% of the people who read our papers will go away with an impression of the work formed entirely by the abstract (and maybe a few of the illustrations). That’s the reality, and we need to calibrate our expectations and our choices accordingly.
May 14, 2018 at 4:53 pm
I placed “force” in quotes exactly because I know there’s no point in (or way of) forcing anything onto anyone, and I surely do not mean for that to be of convenience to _me_. If anything, making bold, poorly supported claims that exploit the “read only the abstract” attitude would profit authors (among whom I count myself, hat on), at least in the short term. I just think it’s something to discourage rather than flatly accept. You did suggest that support for the claims should somehow be conveyed in the abstract, so I guess you do see the problem in having people uncritically buy claims.
Between just looking at the abstract and skimming through “maybe a few of the illustrations” there’s a potentially huge difference. Of course, if the pictures are unrepresentative then the stats can be forged too, but if you assume the figures/graphs do reflect what’s being argued, and if you’re familiar with the experimental approaches, techniques etc., then you can already come away with an idea of the overall reliability of the claims being made in the abstract.
This said, I also count myself among the people who come across vastly more papers than can reasonably be read. I try to store the relevant bits (typically the question being asked and the approach employed, rather than the conclusions) “mentally” for when an issue pops up that requires me to actually dig into the subject more carefully. But for that I don’t think I need a clearly defined conclusion or some bold statement. Besides, I guess one could argue that the bolder a statement, the more likely it is to turn out to be bullshit, which would clearly enrich your literature recollections with garbage.
May 14, 2018 at 8:00 pm
It sounds like you’re concerned by the problem of deliberately misleading, or at least over-inflated, abstracts. I admit that could be an issue, but in my experience it’s hugely overshadowed by the much bigger problem of downright uninformative abstracts.
I think we should all (me very much included!) get into the habit of coming back to our abstracts 24 hours after writing them, and try to read them as if knowing nothing about the project. Then ask: what do I want to know about the project that the abstract could have told me?
May 14, 2018 at 8:12 pm
You guys really need to write a book on this kind of stuff (“Healthy Habits for the Aspiring Researcher” or some such). Whenever you guys talk about good habits in paleontology, they are always things that (at the very least) I have found useful in writing papers (e.g., if naming a taxon, always put the name in the title) and in many cases wish I had known about when I was starting out. Especially given that these habits are things people (including advisors and other colleagues) generally don’t talk about. These posts on good habits may all be freely available via SV-POW, but at the same time having them all together in one place would be really useful.
As to the topic of the post itself (and the discussion above), I think in a lot of cases it is true that people will often only read the abstract and base their conclusions off of that. One thing that I noticed talking to colleagues outside academia is it’s often hard to imagine what it’s like to do research without interlibrary loan and other such wonderful things. If you don’t have university access you have to look at each individual paper and decide whether it’s worth your time to read the whole thing. And because your resources aren’t unlimited, it’s highly likely that you will have to take what you can from the abstract.
As a related observation, think of conference talks and posters. These are works that, almost by definition, ONLY exist as abstracts unless the author decides to post the original slideshow or poster online. If the author doesn’t publish a full study until years later the abstract is essentially the only information one has. Many times I have had to get information off of abstracts for specimens or studies that were either never published or are probably still being worked on.
Of course, what one considers useful and non-useful information for an abstract might vary depending on the preferences of a reader/author.
May 14, 2018 at 8:27 pm
Thanks, Anonymous, I (and I’m sure Matt) really appreciate these comments. A lot of our bossy-boots instructions are collected on the Tutorials page, but yes, something like a book chapter would be a good way to consolidate it all.
May 14, 2018 at 8:38 pm
On a related note, what do you guys think about the number of “big picture” conclusions that should be made in a single paper? In my experience, I have noticed that when I include more than one major “take home” observation in a paper, I find that no one notices them, and as a result they don’t get cited when later researchers talk about the subject. Tying this back to the abstract issue, say you have four major conclusions in a paper (e.g., “this is a new species”, “paleobiology of said species”, “environment of site based on info from new species”, “evolutionary history of the group”). Trying to fit all those in an abstract means sacrificing space that could be used to clarify other conclusions. The same is true of the paper itself. I’ve noticed that if a paper is too long and makes too many observations people just tend to tune out.
On the other hand, these days it seems like it is almost impossible to get a paper published in paleontology unless it’s (A) describing a new taxon, or (B) a statistical analysis. Gone seem to be the days where one could posit an argument based on observational evidence of morphology or evolutionary patterns that didn’t involve a statistical analysis. This isn’t to say don’t provide numerical data (Measure Your Damn Dinosaur is still in effect), but there are studies in which the available data just don’t lend themselves to statistical analysis.
Case in point, I had a paper that, in addition to describing new material, noted a number of odd morphological features in a species which together made what seemed (to me) like a fairly good argument that the species was doing something different from its close relatives, paleobiologically speaking. The argument was built off the data, but it told a “story” of its own that probably could have worked as its own paper. Problem was, all of the data were observational. I could provide measurements saying (Feature X is 20% larger/smaller/etc. in this taxon than in all others), but it wasn’t possible to statistically test it because of the small number of specimens available. I folded it in with the main study because I felt that I couldn’t get such a paper published on its own, as it didn’t involve any major numerical data analyses (e.g., morphospace area, phylogenetic signal, etc.). Probably the closest thing the people at SV-POW have done to this in terms of situation is the neck combat hypothesis in apatosaurines. In that case the conclusion wasn’t necessarily made based on numerical data (though you could quantify that apatosaurine vertebrae are more robust than those of other diplodocids), but on comparative anatomy with other species (which is mostly where my arguments were drawn from).
So the question is do you split up each major “take home” conclusion to avoid overloading the reader, or do you put them together because otherwise the paper and the information can’t get published (even if the conclusions can stand on their own)?
May 14, 2018 at 9:03 pm
I have a whole bucketload of agreement here. My experience — and Matt’s, I think, though he’ll no doubt chip in — is that we’ve written some pretty long thinking-through-the-subject papers (e.g. Taylor and Wedel 2013 on sauropod neck anatomy, Wedel and Taylor 2013 on sauropod neural spine bifurcation), and neither of them seems to have landed as solidly as we might have wished — perhaps because they are papers with no single sound-bitey conclusion, and lots of ideas. The latter in particular I hoped would be widely cited for its Table 1 that shows how wildly different the sequence of osteological fusions runs in different sauropods, but it seems to have got understandably lost in the 34 pages, and had to be expanded into its own paper (Hone, Farke and Wedel 2016).
I am also in sympathy with your statement, “Gone seem to be the days where one could posit an argument based on observational evidence of morphology or evolutionary patterns that didn’t involve a statistical analysis.” This is what I was trying — very, very clumsily — to say in this retracted post. I welcome the trend towards rigour in supporting hypotheses, but at the same time I lament the loss of ideas papers.
There’s an irony here in that my most successful paper (Taylor, Wedel and Naish 2009, which has just passed 100 citations according to Google Scholar) was exactly that kind of paper: ideas, thoughts, comparisons and inferences, but no numbers. There’s evidently an appetite to read and cite such papers, but maybe not to publish them.
Did you get your new-material-and-a-lifestyle-hypothesis paper successfully published?
Anyway, putting it all together, I’m not sure what the best approach is going forwards. Putting any paper through peer-review involves a high fixed cost in time, energy and will to live, independent of the length of that paper, so writing and publishing four eight-pagers that make a single point each is much more work than publishing a single thirty-pager that includes all that material — maybe two or three times as much work in total. I have to admit I find that prospect daunting.
May 14, 2018 at 9:12 pm
Indeed, that’s my concern (or something more than that…). In my field (developmental neurobiology), and in general in those areas of the life sciences that rely heavily on (underpowered) experimental testing of “suitable” hypotheses built on simple views of complex processes, one is often faced with small datasets that seem to have been squeezed into providing a precise answer (regardless of the suitability of the experiment and data for the purpose). The typical solution is often some form of vaguely supported but reasonable claim that many people will take because it fits their own view of how things are likely to work (so you may stand proven after all… if the data/discussion would convince most people anyway, why not focus on the abstract? :)
I’m not saying it’s all crap out here, but maybe encouraging critical digging into data may do everybody a favor as we’d be competing in less hype-driven fields. And I guess one form of encouraging people is to come up with abstracts that invite further reading into the paper.
May 14, 2018 at 9:21 pm
BTW, I have no problem with small or otherwise restricted data-sets, provided that people are up front about what’s what. In the case of the study whose abstract I cited for this post, they make it clear that their data-set is from a single journal (Business, Management and Accounting) — which is good. But they don’t say what their sample size was (bad!) or of course what the result was or how strongly it was supported. For the record, they used a sample of 598 papers, which is the complete set submitted to that journal from 2010 to 2015 — but would it have killed them to say that?
May 14, 2018 at 9:43 pm
“Did you get your new-material-and-a-lifestyle-hypothesis paper successfully published?”
Yes, but unfortunately it ended up being about a page buried within a 35-page paper, which ended up in a journal that, while the editors were exceedingly kind and helpful, still had a $50 paywall per article. So, in retrospect, people are about as likely to read the article as they are to find the soul of Koschei the Deathless. We went with that journal specifically because it had no length requirements, and the manuscript was very long because it touched on so many different areas (paleobiology, paleoecology, phylogeny, extinction of the clade, broader trends in evolutionary history), largely because we felt we couldn’t get the ideas published in a regular manuscript.
And based on the reviews we got when we published…I’m kind of inclined to believe that was true. If the description of the new material wasn’t included in the paper, I’m almost certain it would have never gotten through.
The paper has been cited twice since it was published, both of which were self-citations, despite papers coming out since then that talk directly about the topics touched on therein. Not even any “author’s conclusion is bad and they should feel bad”, which is kind of surprising.
I agree in welcoming the increase in rigor in scientific papers, but at the same time lament the loss of papers whose conclusions are drawn from comparative anatomy, observation of “huh, that’s weird” evolutionary trends, and other things that it’s not necessarily possible to put a p-value to. I remember recently one colleague defended a thesis involving ecosystem structure and one of the questions posed to them was “Why didn’t you test for phylogenetic signal? If I tried to do a similar analysis and not test for phylogenetic signal I couldn’t get the paper published.” The data in question wasn’t even in a form where you could test if phylogenetic signal was present.
And at the same time, if you can test your hypothesis through some statistical method, this isn’t saying don’t. If you can test it and get a significant answer, then that supports your conclusion.
I suppose it depends on the publishing environment. If you have a good publishing environment it’s better to publish lots of small papers that are easier for people to find, read, and digest. But if you have one where people are more likely to be resistant to any suggestion you put forward because of inter-researcher politics perhaps it’s better to put stuff together. There’s probably some analogy with K and r selection to be made but I’m not the one to make it.
May 16, 2018 at 4:38 pm
Two likely outcomes:
1) TL;DR
2) People will dig, but not deep enough, and will misunderstand the paper drastically.
May 17, 2018 at 4:12 pm
(I’ve often thought of labelling my abstract as TL;DR instead of Abstract.)