More on the disturbing plausibility of ChatGPT
April 12, 2023
Prompted by a post on Mastodon (which, like all Mastodon posts, I can no longer find), I asked ChatGPT to tell me about my own papers. The response started out well but quickly got much worse. I will indent my comments on its response.
Q. What are some articles written by Michael P. Taylor?
A. Michael P. Taylor is a vertebrate paleontologist and open science advocate. He has written numerous articles and papers on topics such as the taxonomy and systematics of sauropod dinosaurs, open access publishing, and scholarly communication.
So far, so good. In general, LLMs seem to be good at synthesizing overviews. It’s specific facts that trip them up.
Here are a few examples of his articles:
Taylor, M. P. (2009). A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914) (Dinosauria, Sauropoda). Journal of Vertebrate Paleontology, 29(3), 787-806.
The first one it gives is pretty good. The only mistake is that it has oddly appended “(Dinosauria, Sauropoda)” to the end of the article title. The journal, volume, issue and page-numbers are all correct.
Taylor, M. P. (2011). Sauropod dinosaur research: a historical review. In A. W. A. Kellner, & D. A. S. Silva (Eds.), Vertebrate paleontology and sedimentology of the Jurassic-Cretaceous boundary in the northern and central parts of the Lusitanian Basin, Portugal (pp. 11-43). Sociedade de Geologia de Portugal.
I did write a book-chapter titled “Sauropod dinosaur research: a historical review”. But it came out in 2010, not 2011. And it ran from pages 361–386, not 11-43. And it was in Dinosaurs and Other Extinct Saurians: a Historical Perspective, not Vertebrate paleontology and sedimentology of the Jurassic-Cretaceous boundary in the northern and central parts of the Lusitanian Basin, Portugal (which doesn’t exist). Which was edited by Richard T. J. Moody, Eric Buffetaut, Darren Naish and David M. Martill, not A. W. A. Kellner & D. A. S. Silva. And published by the Geological Society of London, not Sociedade de Geologia de Portugal.
Taylor, M. P., & Wedel, M. J. (2018). Why giraffes have short necks; and why sauropods had long necks. PeerJ Preprints, 6, e27537v1.
Matt and I did co-write a paper with a similar title in PeerJ. But it’s Why sauropods had long necks; and why giraffes have short necks, not the other way around. And it came out in 2013, not 2018. And it was in PeerJ proper, not PeerJ Preprints.
Taylor, M. P. (2020). The open-access wars: How to free science from academic paywalls. Wired.
I never wrote any such paper. There is an article out there whose HTML title is the one given here, though the visible title is completely different: “The war to free science”. It’s in Vox, not Wired; it came out in 2019, not 2020. And it’s by Brian Resnick and Julia Belluz.
Taylor, M. P., & Knope, M. L. (2021). The future of scholarly publishing is in our hands. The Conversation.
As far as I recall, I never wrote anything for The Conversation. There seems to be no article named “The future of scholarly publishing is in our hands”. And I have never met or corresponded with, far less co-authored with, anyone with the surname Knope.
All in all, a pretty horrible performance — but, and this is the scary part, one that even someone pretty familiar with my work could very easily have swallowed.
I made a very similar point a few months ago in These new “artificial intelligence” programs don’t know what they’re talking about. If the responses were obviously wrong, there wouldn’t be too much of a problem here. But this is close enough to fool most people most of the time.
As more and more of what we read in the coming years is likely to be written by LLMs, I am concerned that we’re going to be dealing with an awful lot of this kind of thing. The ability to pan gold out of the stream of lies is going to become really important.
These new “artificial intelligence” programs don’t know what they’re talking about

I’m sure you’ve seen things like ChatGPT in the news: programs that can carry out pretty convincing conversations. They are known as Large Language Models (LLMs) and are frequently referred to as Artificial Intelligence (AI) — but I really don’t like that designation, as it implies some understanding.
To try this out, I asked OpenAI Chat this question: “What are the contenders for the title of Biggest Dinosaur, and what is the basis for their various claims?” I picked that because I know a bit about the answer, and in fact wrote an answer myself back in 2015.
Here’s what it came up with:
There are several contenders for the title of “biggest dinosaur,” and their claims are based on different pieces of evidence. The most commonly cited contenders are:
- Argentinosaurus: This is considered to be one of the largest known dinosaurs, with estimates of its length ranging from 110-130 feet (33-40 meters) and a weight of around 100-110 tons. Its size is based on a few fragmented bones, including vertebrae and limb bones, which were found in Argentina.
- Brachiosaurus: This dinosaur is estimated to have been around 85 feet (26 meters) long and weighed around 80-100 tons. Its size is based on several well-preserved specimens that have been found in North America, including a nearly complete skeleton.
- Puertasaurus: This dinosaur is estimated to have been around 85-100 feet (26-30 meters) long and weighed around 80-100 tons. Its size is based on a few bones, including a femur, that were found in Argentina.
- Supersaurus: This dinosaur is estimated to have been around 110-125 feet (33-38 meters) long and weighed around 80-100 tons. Its size is based on a single specimen, a set of vertebrae, that was found in the US.
- Diplodocus: This dinosaur is estimated to have been around 90-115 feet (27-35 meters) long and weighed around 20-40 tons. Its size is based on several well-preserved specimens that have been found in North America.
It’s worth noting that the size of these dinosaurs is approximate and there’s still debate among paleontologists. Also, the weight of these dinosaurs is based on estimations, not real measurements.
At first glance this is a decent answer. It’s certainly better than just saying “Argentinosaurus weighed 100 tons”. But if you know your sauropods, you’ll immediately recognise that this answer is riddled with inaccuracies:
- The Argentinosaurus entry is actually pretty good, although I’ve never seen a 110 ton mass estimate and modern estimates tend to be rather lower than the specified range.
- The Brachiosaurus mass estimate is way off, two or three times as high as it should be.
- More importantly, no remotely complete Brachiosaurus specimen has ever been described, and size estimates are based primarily on its Tanzanian relative Giraffatitan.
- There is no known femur of Puertasaurus. (When I corrected the AI on this, it told me instead that there is a pubis known.)
- Supersaurus is not based on a single specimen, and both of the main specimens that have been described contain plenty of appendicular material.
- The Diplodocus length estimate is a bit inflated, but otherwise not bad. But it’s not clear what it’s doing in a list of the five biggest dinosaurs.
- The answer omits some very strong contenders, including Dreadnoughtus and Patagotitan.
- It doesn’t really address the second part of my question — e.g. Supersaurus has a good claim to being the longest dinosaur, but not the heaviest; the converse is likely true for Argentinosaurus.
Now here is the real problem: the LLM does well enough to fool people. If it were nonsense from start to finish, there would be nothing to fear here, but the plausibility of the answers and the authoritative tone in which they are given lend the many mistakes a credibility that they do not deserve.
Having seen this sort-of-convincing-but-very-wrong reply in a field that I know something about, I would be very, very cautious about trusting an LLM to teach me about a field I don’t already know. I’m guessing its replies about space flight, quantum physics and Medieval French literature are going to be similarly flawed (but also, worryingly, similarly convincing to those of us who don’t know better).
There is a very fundamental reason for all these mistakes: as I implied above, LLMs do not understand anything. They just know what phrases occur close to other phrases. They can do amazing things with that one trick, and I can see them being useful as discovery tools. But we’ll go badly wrong when we start trusting them as anything more than a bright but ignorant kid offering suggestions.
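To make that one trick concrete, here is a minimal sketch of the idea: a toy bigram model in Python. It is nothing like the scale or subtlety of a real LLM, and the training “corpus” is three made-up sentences, but the spirit is the same: record which words follow which, then generate text accordingly, with no model of the world at all.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it learns nothing except which word
# tends to follow which. (Made-up corpus, purely for illustration.)
corpus = (
    "argentinosaurus is considered one of the largest known dinosaurs . "
    "brachiosaurus is estimated to have been among the largest known dinosaurs . "
    "supersaurus is estimated to have been one of the longest known dinosaurs ."
).split()

# Record every word that was ever seen immediately after each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word, length=12):
    """Generate text by repeatedly picking any word seen after the current one."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(babble("brachiosaurus"))
# e.g. "brachiosaurus is estimated to have been one of the largest known dinosaurs ."
# Fluent-sounding, and assembled with zero understanding of dinosaurs.
```

The output reads fluently precisely because every transition was seen in real text; the fluency is no evidence of understanding.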
So for all the talk of AI having taken huge leaps forward in the last couple of years, I don’t think any such thing has happened. We’ve just got much better at generating plausible text. But there’s no advance in actual understanding.
Tutorial 39: how not to conclude a talk or paper
March 19, 2021
“And in conclusion, this new fossil/analysis shows that Lineageomorpha was more [here fill in the blank]:
- diverse
- morphologically varied
- widely distributed geographically
- widely distributed stratigraphically
…than previously appreciated.”
Yes, congratulations, you’ve correctly identified that time moves forward linearly and that information accumulates. New fossils that make a group less diverse, varied, or widely distributed — now that’s a real trick.
Okay, that was snarky to the point of being mean, and here I must clarify that (1) I haven’t been to a conference in more than a year, so hopefully no-one thinks I’m picking on them, which is good, because (2) I myself have ended talks this way, so I’m really sniping at Old Matt.
And, yeah, new fossils are nice. But for new fossils or new analyses to expand what we know is expected. It’s almost the null hypothesis for science communication — if something doesn’t expand what we know, why are we talking about it? So the fact that find X or analysis Y takes our knowledge beyond what was “previously appreciated” is good, but it’s not a particularly interesting thing to say out loud, and it’s a really weak conclusion.
(Some cases where just being new is enough: being surprisingly new, big expansions [like hypothetically finding a tyrannosaur in Argentina], and new world records.)
Don’t be Old Matt. Find at least one thing to say about your topic that is more interesting or consequential than the utterly pedestrian observation that it added information that was not “previously appreciated”. The audience already suspected that before you began, or they wouldn’t be here.
I showed this post to Mike before I published it, and he said, “What first made you want to work on this project? That’s your punchline: the thing that was cool enough that you decided to invest months of effort into it.” Yes! Don’t just tell the audience that new information exists, tell them why it is awesome.
Cool URIs don’t change
November 26, 2020
It’s now 22 years since Tim Berners-Lee, inventor of the World Wide Web, wrote the classic document Cool URIs don’t change [1]. Its core message is simple, and the title summarises it. Once an organization brings a URI into existence, it should keep it working forever. If the document at that URI moves, then the old URI should become a redirect to the new. This really is Web 101 — absolute basics.
So imagine my irritation when I went to point a friend to Matt’s and my 2013 paper on whether neural-spine bifurcation is an ontogenetic character (spoiler: no), only to find that the paper no longer exists.

Wedel and Taylor (2013b: figure 15). An isolated cervical of cf. Diplodocus MOR 790 8-10-96-204 (A) compared to D. carnegii CM 84/94 C5 (B), C9 (C), and C12 (D), all scaled to the same centrum length. Actual centrum lengths are 280 mm, 372 mm, 525 mm, and 627 mm for A-D respectively. MOR 790 8-10-96-204 modified from Woodruff & Fowler (2012: figure 2B), reversed left to right for ease of comparison; D. carnegii vertebrae from Hatcher (1901: plate 3).
Well — it’s not quite that bad. I was able to go to the web-site’s home page, navigate to the relevant volume and issue, and find the new location of our paper. So it does still exist, and I was able to update my online list of publications accordingly.
But seriously — this is a really bad thing to do. How many other links might be out there to our paper? All of them are now broken. Every time someone out there follows a link to a PalArch paper — maybe wondering whether that journal would be a good match for their own work — they are going to run into a 404 that says “We can’t run our website properly and can’t be trusted with your work”.
“But Mike, we need to re-organise our site, and —” Ut! No. Let’s allow Sir Tim to explain:
We just reorganized our website to make it better.
Do you really feel that the old URIs cannot be kept running? If so, you chose them very badly. Think of your new ones so that you will be able to keep them running after the next redesign.
Well, we found we had to move the files…
This is one of the lamest excuses. A lot of people don’t know that servers such as Apache give you a lot of control over a flexible relationship between the URI of an object and where a file which represents it actually is in a file system. Think of the URI space as an abstract space, perfectly organized. Then, make a mapping onto whatever reality you actually use to implement it. Then, tell your server.
If you are a responsible organization, then one of the things you are responsible for is ensuring that you don’t break inbound links. If you want to reorganize, fine — but add the redirects.
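And adding them is genuinely cheap. Here is a minimal sketch of what that looks like in Apache (the old and new paths are hypothetical, purely for illustration):

```apache
# mod_alias: permanently (301) redirect a single moved paper
# from its old URL to its new home.
Redirect permanent /old/papers/wedel-taylor-2013.pdf /journal/vol10/issue1/wedel-taylor-2013.pdf

# A whole reorganized directory can be mapped with one pattern.
RedirectMatch permanent ^/volumes/(.*)$ /journal/archive/$1
```

Two lines of configuration, and every inbound link keeps working.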
And look, I’m sorry, I really don’t want to pick on PalArch, which is an important journal. Our field really needs diamond OA journals: that is, venues where vertebrate paleontology articles are free to read and also free to authors. It’s a community-run journal that is not skimming money out of academia for shareholders, and Matt’s and my experience with their editorial handling was nothing but good. I recommend them, and will probably publish there again (despite my current irritation). But seriously, folks.
And by the way, there are much worse offenders than PalArch. Remember Aetogate, the plagiarism-and-claim-jumping scandal in New Mexico that the SVP comprehensively fudged its investigation of? The documents that the SVP Ethics Committee produced, such as they were, were posted on the SVP website in early 2008, and my blog-post linked to them. By July, they had moved, and I updated my links. By July 2013, they had moved again, and I updated my links again. By October 2015 they had moved for a third time: I both updated my links, and made my own copy in case they vanished. Sure enough, by February 2019 they had gone again — either moved for a fourth time or just quietly discarded. This is atrocious stewardship by the flagship society of our discipline, and they should be heartily ashamed that in 2020, anyone who wants to know what they concluded about the Aetogate affair has to go and find their documents on a third-party blog.
Seriously, people! We need to up our game on this!
Cool URIs don’t change.
[1] Why is this about URIs instead of URLs? In the end, no reason. Technically, URIs are a broader category than URLs, and include URNs. But since no-one anywhere in the universe has ever used a URN, in practice URL and URI are synonymous; and since TBL wrote his article in 1998, “URL” has clearly won the battle for hearts and minds and “URI” has diminished and gone into the West. If you like, mentally retitle the article “Cool URLs don’t change”.
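If you want the distinction spelled out: a URL identifies a resource by where it lives, a URN by what it is named, and both are kinds of URI. The URN namespaces below (isbn and ietf) are real registered ones, for all that you will never meet them in the wild:

```
URL:  https://www.w3.org/Provider/Style/URI   (locates the document)
URN:  urn:isbn:0451450523                     (names a book; no location implied)
URN:  urn:ietf:rfc:2648                       (names an RFC; no location implied)
```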
I think we’ve all had enough of the Impact Factor as a way of measuring the quality of journals. From Ginny Barbour’s forensic account of negotiating PLoS Medicine’s IF back in 2006, via Stephen Curry’s measured rant back in 2012 (“if you use impact factors you are statistically illiterate”) and Björn Brembs’ survey of how very widespread IF negotiations are in 2016, to all the recent negotiations with Clarivate about which journals should even have IFs, it’s become increasingly obvious that the Impact Factor is not a metric, it’s a negotiation.
And of course this means that the reason any journal has the particular IF it has is completely opaque.
The world needs a much more transparent metric of journal quality, and I am here to offer it! The Objective Quality Factor (OQF) is assigned in a wholly straightforward way that anyone can understand:
Your journal obtains an OQF of x by paying me x pounds.
That’s it. As soon as I acknowledge your payment, you have the right to display your OQF on the journal home page and in marketing materials.
If another journal in your field obtains a higher OQF than yours, and you need to regain your journal’s position at the top of the totem pole, all you need do is send me more money.
Payments via PayPal to ebay@miketaylor.org.uk please!
Why do people publish in Scientific Reports?
April 25, 2020
In the last post, I catalogued some of the reasons why Scientific Reports, in its cargo-cult attempts to ape print journals such as its stablemate Nature, is an objectively bad journal that removes value from the papers submitted to it: the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, the pointless abbreviating of author names and journal titles.
This is particularly odd when you consider the prices of the obvious alternative megajournals:
- PeerJ: $1,195 APC (or free if authors are members)
- PLOS ONE: $1,695 APC
- Scientific Reports: $1,870 APC
So to have your paper published in Scientific Reports costs 10% more than in PLOS ONE, or 56% more than in PeerJ; and results in an objectively worse product that slices the paper up and dumps chunks of it in the back lot, compresses and combines the illustrations, and messes up the narrative.
So why would anyone choose to publish in it?
Well, the answer is depressingly obvious. As a colleague once expressed it to me: “until I have a more stable job I’ll need the highest IFs I can pull off to secure a position somewhere”.
It’s as simple as that. PeerJ’s impact factor at the time of writing is 2.353; PLOS ONE’s is 2.776; that of Scientific Reports is 4.525. And so, in the idiotic world we live in, it’s better for an author’s career to pay more for a worse version of his article in Scientific Reports than it is to pay less for a better version in PeerJ or PLOS ONE. Because it looks better to have got into Scientific Reports.
BUT WAIT A MINUTE. These three journals are all “megajournals”. They all have the exact same editorial criterion, which is that they accept any paper that is scientifically sound. They make no judgement about novelty, perceived importance or likely significance of the work. They are all completely up front about this. It’s how they work.
In other words, “getting into” Scientific Reports instead of PeerJ says absolutely nothing about the quality of your work, only that you paid a bigger APC.
Can we agree it’s insane that our system rewards researchers for paying a bigger APC to get a less scientifically useful version of their work?
Let me say in closing that I intend absolutely no criticism of Daniel Vidal or his co-authors for placing their Spinophorosaurus posture paper in Scientific Reports. He is playing the ball where it lies. We live, apparently, in a world where spending an extra $675 and accepting a scientifically worse result is good for your career. I can’t criticise Daniel for doing what it takes to get on in that world.
The situation is in every respect analogous to the following: before you attend a job interview, you are told by a respected senior colleague that your chances of getting the post are higher if you are wearing designer clothing. So you take $675 and buy a super-expensive shirt with a prominent label. If you get the job, you’ll consider it a bargain.
But you will never have much respect for the search committee that judged you on such idiotic criteria.
Scientific Reports is an objectively bad journal
April 25, 2020
As I was figuring out what I thought about the new paper on sauropod posture (Vidal et al. 2020) I found the paper uncommonly difficult to parse. And I quickly came to realise that this was not due to any failure on the authors’ part, but to the journal it was published in: Nature’s Scientific Reports.
A catalogue of pointless whining
A big part of the problem is that the journal inexplicably insists on moving important parts of the manuscript out of the main paper and into supplementary information. So for example, as I read the paper, I didn’t really know what Vidal et al. meant by describing a sacrum as wedged: did it mean non-parallel anterior and posterior articular surfaces, or just that those surfaces are not at right angles to the long axis of the sacrum? It turns out to be the former, but I only found that out by reading the supplementary information:
The term describes marked trapezoidal shape in the centrum of a platycoelous vertebrae in lateral view or in the rims of a condyle-cotyle (procoelous or opisthocoelous) centrum type.
This crucial information is nowhere in the paper itself: you could read the whole thing and still not understand its core point, because the key piece of terminology is never defined there.
And the relegation of important material to second-class, unformatted, maybe un-reviewed supplementary information doesn’t end there, by a long way. The SI includes crucial information, and a lot of it:
- A terminology section of which “wedged vertebrae” is just one of ten sub-sections, including a crucial discussion of different interpretations of what ONP means.
- All the information about the actual specimens the work is based on.
- All the meat of the methods, including how the specimens were digitized, retro-deformed and digitally separated.
- How the missing forelimbs, so important to the posture, were interpreted.
- How the virtual skeleton was assembled.
- How the range of motion of the neck was assessed.
- Comparisons of the sacra of different sauropods.
And lots more. All this stuff is essential to properly understanding the work that was done and the conclusions that were reached.
And there’s more: as well as the supplementary information, which contains six supplementary figures and three supplementary tables, there is an additional supplementary supplementary table, which could quite reasonably have gone into the supplementary information.
In a similar vein, even within the highly compressed actual paper, the Materials and Methods are hidden away at the back, after the Results, Discussion and Conclusion — as though they are something to be ashamed of; or, at best, an unwelcome necessity that can’t quite be omitted altogether, but need not be on display.
Then we have the disappointingly small illustrations: even the “full size” version of the crucial Figure 1 (which contains both the full skeleton and callout illustrations of key bones) is only 1000×871 pixels. (That’s why the illustration of the sacrum that I pulled out of the paper for the previous post was so inadequate.)
Compare that with, for example, the 3750×3098 Figure 1 of my own recent Xenoposeidon paper in PeerJ (Taylor 2018) — that has more than thirteen times as much visual information. And the thing is, you can bet that Vidal et al. submitted their illustration in much higher resolution than 1000×871. The journal scaled it down to that size. In 2020. That’s just crazy.
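(For the record, the arithmetic: 3750 × 3098 = 11,617,500 pixels, against 1000 × 871 = 871,000 pixels — a ratio of a little over 13.3.)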
And to make things even worse, unrelated images are shoved into multi-part illustrations. Consider the ridiculousness of figure 2:

Vidal et al. (2020: figure 2). The verticalization of sauropod feeding envelopes. (A) Increased neck range of motion in Spinophorosaurus in the dorso-ventral plane, with the first dorsal vertebra as the vertex and 0° marking the ground. Poses shown: (1) maximum dorsiflexion; (2) highest vertical reach of the head (7.16 m from the ground), with the neck 90° deflected; (3) alert pose sensu Taylor, Wedel and Naish [13]; (4) osteological neutral pose sensu Stevens [14]; (5) lowest vertical reach of the head (0.72 m from the ground at 0°), with the head as close to the ground without flexing the appendicular elements; (6) maximum ventriflexion. Blue indicates the arc described between maximum and minimum head heights. Grey indicates the arc described between maximum dorsiflexion and ventriflexion. (B) Bivariant plot comparing femur/humerus proportion with sacrum angle. The proportion of humerus and femur are compared as a ratio of femur maximum length/humerus maximum length. Sacrum angle measures the angle the presacral vertebral series are deflected from the caudal series by sacrum geometry in osteologically neutral pose. Measurements and taxa on Table 1. Scale = 1000 mm.
It’s perfectly clear that parts A and B of this figure have nothing to do with each other. It would be far more sensible for them to appear as two separate figures — which would allow part B enough space to convey its point much more clearly. (And would save us from a disconcertingly inflated caption.)
And there are other, less important irritants. Authors’ given names not divulged, only initials. I happen to know that D. Vidal is Daniel, and that J. L. Sanz is José Luis Sanz; but I have no idea what the P in P. Mocho, the A in A. Aberasturi or the F in F. Ortega stand for. Journal names in the bibliography are abbreviated, in confusing and sometimes ludicrous ways: is there really any point in abbreviating Palaeogeography Palaeoclimatology Palaeoecology to Palaeogeogr. Palaeoclimatol. Palaeoecol?
The common theme
All of these problems — the unnatural shortening that relegates important material into supplementary information, the downplaying of methods, the tiny figures that ram unrelated illustrations into compound images, even the abbreviating of author names and journal titles — have this in common: that they are aping how Science ‘n’ Nature appear in print.
They represent a sort of cargo cult: a superstitious belief that extreme space pressure (of the kind print journals legitimately wrestle with) is somehow an indicator of quality, and that copying the form of prestigious journals will make the content equally revered.
And this is simply idiotic. Scientific Reports is an open-access web-only journal that has no print edition. It has no rational reason to compress space like a print journal does. In omitting the “aniel” from “Daniel Vidal” it is saving nothing. All it’s doing is landing itself with the limitations of print journals in exchange for nothing. Nothing at all.
Why does this matter?
This squeezing of a web-based journal into a print-sized pot matters because it’s apparent that a tremendous amount of brainwork has gone into Vidal et al.’s research; but much of that is obscured by the glam-chasing presentation of Scientific Reports. It reduces a Pinter play to a soap-opera episode. The work deserved better; and so do readers.
References
- Taylor, Michael P. 2018. Xenoposeidon is the earliest known rebbachisaurid sauropod dinosaur. PeerJ 6:e5212. doi: 10.7717/peerj.5212
- Vidal, Daniel, P. Mocho, A. Aberasturi, J. L. Sanz and F. Ortega. 2020. High browsing skeletal adaptations in Spinophorosaurus reveal an evolutionary innovation in sauropod dinosaurs. Scientific Reports 10:6638. doi:10.1038/s41598-020-63439-0. Indispensable supplementary information at https://static-content.springer.com/esm/art%3A10.1038%2Fs41598-020-63439-0/MediaObjects/41598_2020_63439_MOESM1_ESM.pdf
Diverticulum, diverticula
November 4, 2018
This is not ‘Nam. This is Latin. There are rules.
The term for a small growth off an organ or body is diverticulum, singular, or diverticula, plural. There are no diverticulae or God forbid diverticuli, no matter what you might read in some papers. Diverticuli is a word – it’s the genitive form of diverticulum. But I’ve never seen it used that way in an anatomy or paleo paper. Diverticuli and diverticulae as alt-plurals for diverticulum are abominations that must be stomped out with extreme prejudice. If you want to get cute with alternative spellings, Wiktionary says you can use deverticulum. Wiktionary does not warn you that you will be mocked for doing so, but it is true nonetheless.
Stop jacking up straightforward anatomical terms, authors who should know better.
Here’s a swan. Unlike diverticuli and diverticulae, this unlikely morphology is real.
It’s common to come across abstracts like this one, from an interesting paper on how a paper’s revision history influences how often it gets cited (Rigby, Cox and Julian 2018):
Journal peer review lies at the heart of academic quality control. This article explores the journal peer review process and seeks to examine how the reviewing process might itself contribute to papers, leading them to be more highly cited and to achieve greater recognition. Our work builds on previous observations and views expressed in the literature about (a) the role of actors involved in the research and publication process that suggest that peer review is inherent in the research process and (b) on the contribution reviewers themselves might make to the content and increased citation of papers. Using data from the journal peer review process of a single journal in the Social Sciences field (Business, Management and Accounting), we examine the effects of peer review on papers submitted to that journal including the effect upon citation, a novel step in the study of the outcome of peer review. Our detailed analysis suggests, contrary to initial assumptions, that it is not the time taken to revise papers but the actual number of revisions that leads to greater recognition for papers in terms of citation impact. Our study provides evidence, albeit limited to the case of a single journal, that the peer review process may constitute a form of knowledge production and is not the simple correction of errors contained in submitted papers.
This tells us that a larger number of revisions leads to (or at least is correlated with) an increased citation-count. Interesting!
Immediately, I have two questions, and I bet you do, too:
1. What is the size of the effect?
2. How robust is it?
If their evidence says that each additional round of peer-review yields a dozen additional citations, I might be prepared to revise my growing conviction that multiple rounds of peer review are essentially a waste of time. If it says that each round yields 0.0001 additional citations, I won’t. And if the effect is statistically insignificant, I’ll ignore it completely.
But the abstract doesn’t tell me those simple and fundamental facts, which means the abstract is essentially useless. Unless the authors’ goal for the abstract was for it to be an advertisement for the paper — but that’s not what an abstract is for.
In the old days, authors didn’t write abstracts for their own papers. These were provided after the event — sometimes after publication — by third parties, as a service for those who did not have time to read the whole paper but were interested in its findings. The goal of an abstract is to act as a summary of the paper, a surrogate that a reader can absorb instead of the whole paper, and which summarises the main findings. (I find it interesting that in some fields, the term “précis” or “synopsis” is used: both are more explicit.)
Please, let’s all recognise the painful truth that most people who read abstracts of our papers will not go on to read the full manuscripts. Let’s write our abstracts for those short-on-time people, so they go away with a clear and correct understanding of what our findings were and how strongly they are supported.