Today marks the one-month anniversary of my and Matt’s paper in Qeios about why vertebral pneumaticity in sauropods is so variable (Taylor and Wedel 2021). We were intrigued to publish on this new platform that supports post-publication peer-review, partly just to see what would happen.

Taylor and Wedel (2021: figure 3). Brontosaurus excelsus holotype YPM 1980, caudal vertebrae 7 and 8 in right lateral view. Caudal 7, like most of the sequence, has a single vascular foramen on the right side of its centrum, but caudal 8 has two; others, including caudal 1, have none.

So what has happened? Well, as I write this, the paper has been viewed 842 times, downloaded a healthy 739 times, and acquired an altmetric score of 21, based rather incestuously on two SV-POW! blog-posts, 14 tweets and a single Mendeley reader.

What hasn’t happened is even a single comment on the paper. Nothing that could be remotely construed as a post-publication peer-review. And therefore no progress towards our being able to count this as a peer-reviewed publication rather than a preprint — which is how I am currently classifying it in my publications list.

This, despite our having actively solicited reviews both here on SV-POW!, in the original blog-post, and in a Facebook post by Matt. (Ironically, the former got seven comments and the latter got 20, but the actual paper none.)

I’m not here to complain; I’m here to try to understand.

On one level, of course, this is easy to understand: writing a more-than-trivial comment on a scholarly article is work, and it garners very little of the kind of credit academics care about. Reputation on the Qeios site is nice, in a that-and-two-bucks-will-buy-me-a-coffee kind of way, but it’s not going to make a difference to people’s CVs when they apply for jobs and grants — not even in the way that “Reviewed for JVP” might. I completely understand why already overworked researchers don’t elect to invest a significant chunk of time in voluntarily writing a reasoned critique of someone else’s work when they could be putting that time into their own projects. It’s why so very few PLOS articles have comments.

On the other hand, isn’t this what we always do when we write a solicited peer-review for a regular journal?

So as I grope my way through this half-understood brave new world that we’re creating together, I am starting to come to the conclusion that — with some delightful exceptions — peer-review is generally only going to happen when it’s explicitly solicited by a handling editor, or someone with an analogous role. No-one’s to blame for this: it’s just reality that people need a degree of moral coercion to devote that kind of effort to other people’s projects. (I’m the same; I’ve left almost no comments on PLOS articles.)

Am I right? Am I unduly pessimistic? Is there some other reason why this paper is not attracting comments when the Barosaurus preprint did? Teach me.

References

  • Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi: 10.32388/1G6J3Q

We’ve noted many times over the years how inconsistent pneumatic features are in sauropod vertebrae. Fossae and foramina vary between individuals of the same species, and along the spinal column, and even between the sides of individual vertebrae. Here’s an example that we touched on in Wedel and Taylor (2013), but which is seen in all its glory here:

Taylor and Wedel (2021: Figure 5). Giraffatitan brancai tail MB.R.5000, part of the mounted skeleton at the Museum für Naturkunde Berlin. Caudal vertebrae 24–26 in left lateral view. While caudal 26 has no pneumatic features, caudal 25 has two distinct pneumatic fossae, likely excavated around two distinct vascular foramina carrying an artery and a vein. Caudal 24 is more shallowly excavated than 25, but may also exhibit two separate fossae.

But bone is usually the least variable material in the vertebrate body. Muscles vary more, nerves more again, and blood vessels most of all. So why are the vertebrae of sauropods so much more variable than other bones?

Our new paper, published today (Taylor and Wedel 2021), proposes an answer! Please read it for the details, but here’s the summary:

  • Early in ontogeny, the blood supply to vertebrae comes from arteries that initially served the spinal cord, penetrating the bone of the neural canal.
  • Later in ontogeny, additional arteries penetrate the centra, leaving vascular foramina (small holes carrying blood vessels).
  • This hand-off does not always run to completion, due to the variability of blood vessels.
  • In extant birds, when pneumatic diverticula enter the bone they do so via vascular foramina, alongside blood vessels.
  • The same was probably true in sauropods.
  • So in vertebrae that got all their blood supply from vascular foramina in the neural canal, diverticula were unable to enter the centra from the outside.
  • So those centra were never pneumatized from the outside, and no externally visible pneumatic cavities were formed.

Somehow that pretty straightforward argument ended up running to eleven pages. I guess that’s what you get when you reference your thoughts thoroughly, illustrate them in detail, and discuss the implications. But the heart of the paper is that little bullet-list.

Taylor and Wedel (2021: Figure 6). Domestic duck Anas platyrhynchos, dorsal vertebrae 2–7 in left lateral view. Note that the two anteriormost vertebrae (D2 and D3) each have a shallow pneumatic fossa penetrated by numerous small foramina.

(What is the relevance of these duck dorsals? You will need to read the discussion in the paper to find out!)

Our choice of publication venue

The world moves fast. It’s strange to think that only eleven years ago my Brachiosaurus revision (Taylor 2009) was in the Journal of Vertebrate Paleontology, a journal that now feels very retro. Since then, Matt and I have both published several times in PeerJ, which we love. More recently, we’ve been posting preprints of our papers — and indeed I have three papers stalled in peer-review revisions that are all available as preprints (two Taylor and Wedels and a single sole-authored one). But this time we’re pushing on even further into the Shiny Digital Future.

We’ve published at Qeios. (It’s pronounced “chaos”, but the site doesn’t tell you that; I discovered it on Twitter.) If you’ve not heard of it — I was only very vaguely aware of it myself until this evening — it runs on the same model as the better known F1000 Research, with this very important difference: it’s free. Also, it looks rather slicker.

That model is: publish first, then filter. This is the opposite of the traditional scholarly publishing flow where you filter first — by peer reviewers erecting a series of obstacles to getting your work out — and only after negotiating that course do you get to see your work published. At Qeios, you go right ahead and publish: it’s available right off the bat, but clearly marked as awaiting peer-review:

And then it undergoes review. Who reviews it? Anyone! Ideally, of course, people with some expertise in the relevant fields. We can then post any number of revised versions in response to the reviews — each revision having its own DOI and being fixed and permanent.

How will this work out? We don’t know. It is, in part, an experiment. What will make it work — what will impute credibility to our paper — is good, solid reviews. So if you have any relevant expertise, we do invite you to get over there and write a review.

And finally …

Matt noted that I first sent him the link to the Qeios site at 7:44 pm my time. I think that was the first time he’d heard of it. He and I had plenty of back and forth on where to publish this paper before I pushed on and did it at Qeios. And I tweeted that our paper was available for review at 8:44 — one hour exactly after Matt learned that the venue existed. Now here we are at 12:04 my time, three hours and 20 minutes later, and it’s already been viewed 126 times and downloaded 60 times. I think that’s pretty awesome.

References

  • Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787–806. [PDF]
  • Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi: 10.32388/1G6J3Q [PDF]
  • Wedel, Mathew J., and Michael P. Taylor 2013b. Caudal pneumaticity and pneumatic hiatuses in the sauropod dinosaurs Giraffatitan and Apatosaurus. PLOS ONE 8(10):e78213. 14 pages. doi: 10.1371/journal.pone.0078213 [PDF]

Back in 2005, three years before their paper on the WDC Supersaurus known as Jimbo was published, Lovelace et al. presented their work as a poster at the annual SVP meeting. The abstract for that poster appeared, as usual, in the abstracts book that came as a supplement to JVP 25 issue 3. But the poster itself was never published — which is a shame, as it contains some useful images that didn’t make it into the descriptive paper (Lovelace et al. 2008).

With Dave and Scott’s blessing, here it is! Click through for full resolution, of course.

And here’s the abstract as it appeared in print (Lovelace et al. 2005):

REVISED OSTEOLOGY OF SUPERSAURUS VIVIANAE

LOVELACE, David, HARTMAN, Scott, WAHL, William, Wyoming Dinosaur Center, Thermopolis, WY

A second, and more complete, associated specimen of Supersaurus vivianae (WDC-DMJ021) was discovered in the Morrison Formation of east-central Wyoming in a single sauropod locality. The skeleton provides a more complete picture of the osteology of S. vivianae, including a surprising number of apatosaurine characteristics. The caudals have heart shaped centra that lack a ventral longitudinal hollow, and the rectangular distal neural spines of the anterior caudals are mediolaterally expanded similar to Apatosaurus excelsus. The centra of the anterior caudals are procoelous as in other diplodocids, but the posterior ball is very weakly pronounced. The robusticity of the tibiae and fibulae are intermediate between Apatosaurus and diplodocines. The cervical vertebrae demonstrate classic diplodocine elongation with an elongation index ranging from 4 to 7.5. All 7 of the new cervicals have a centrum length that exceeds 1 meter. Mid-posterior cervicals are semicamellate at mid-centra near the pneumatic foramina. The dorsal vertebrae exhibit a high degree of elaboration on laminae, and extremely rugose pre and postspinal laminae. Costal elements are robust, with complex pneumatic innervations in the rib head. Although unknown in other diplodocids, early reports described pneumatic ribs in an A. excelsus; unfortunately the described specimen is unavailable.

Inclusion of lesser-known North American diplodocids such as Supersaurus, Seismosaurus and Suuwassea in phylogenetic studies, may provide a framework for better understanding North American diplodocid evolution.

Many thanks to Dave and Scott for permission to share this important poster more widely. (Publish your posters, people! That option didn’t exist in 2005, but it does now!)

References

  • Lovelace, David M., Scott A. Hartman and William R. Wahl. 2005. Revised Osteology of Supersaurus vivianae (SVP poster). Journal of Vertebrate Paleontology 25(3):84A–85A.
  • Lovelace, David M., Scott A. Hartman and William R. Wahl. 2008. Morphology of a specimen of Supersaurus (Dinosauria, Sauropoda) from the Morrison Formation of Wyoming, and a re-evaluation of diplodocid phylogeny. Arquivos do Museu Nacional, Rio de Janeiro 65(4):527–544.

If you don’t get to give a talk at a meeting, you get bumped down to a poster. That’s what’s happened to Matt, Darren and me at this year’s SVPCA, which is coming up next week. My poster is about a weird specimen that Matt and I have been informally calling “Biconcavoposeidon” (which I remind you is not a formal taxonomic name).

Here it is, for those of you who won’t be at the meeting (or who just want a preview):

But wait — there’s more. The poster is now also formally published (Taylor and Wedel 2017) as part of the PeerJ preprint containing the conference abstract. It has a DOI and everything. I’m happy enough about it that I’m now citing it in my CV.

Do scientific posters usually get published? Well, no. But why not? I can’t offhand think of a single example of a published poster, though there must be some out there. They are, after all, legitimate research artifacts, and typically contain more information than published abstracts. So I’m happy to violate that norm.

Folks: it’s 2017. Publish your posters.

References

  • Taylor, Michael P., and Mathew J. Wedel. 2017. A unique Morrison-Formation sauropod specimen with biconcave dorsal vertebrae. p. 78 in: Abstract Volume: The 65th Symposium on Vertebrate Palaeontology and Comparative Anatomy & The 26th Symposium on Palaeontological Preparation and Conservation. University of Birmingham: 12th–15th September 2017. 79 pp. PeerJ preprint 3144v2. doi:10.7287/peerj.preprints.3144v2/supp-1

[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly to hopefully improve the flow, but I’ve tried not to tinker with the guts.]

This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem. Then in the next post, he introduced the idea of the LWM, or Less Wrong Metric, and the basic mathematical framework for calculating LWMs. Most recently, Mike talked about choosing parameters for the LWM, and drilled down to a fundamental question: (how) do we identify good research?

Let me say up front that I am fully convicted about the problem of evaluating researchers fairly. It is a question of direct and timely importance to me. I serve on the Promotion & Tenure committees of two colleges at Western University of Health Sciences, and I want to make good decisions that can be backed up with evidence. But anyone who has been in academia for long knows of people who have had their careers mangled, by getting caught in institutional machinery that is not well-suited for fairly evaluating scholarship. So I desperately want better metrics to catch on, to improve my own situation and those of researchers everywhere.

For all of those reasons and more, I admire the work that Mike has done in conceiving the LWM. But I’m pretty pessimistic about its future.

I think there is a widespread misapprehension that we got here because people and institutions were looking for good metrics, like the LWM, and we ended up with things like impact factors and citation counts because no-one had thought up anything better. Implying a temporal sequence of:

1. Deliberately looking for metrics to evaluate researchers.
2. Finding some.
3. Trying to improve those metrics, or replace them with better ones.

I’m pretty sure this is exactly backwards: the metrics that we use to evaluate researchers are mostly simple – easy to explain, easy to count (the hanky-panky behind impact factors notwithstanding) – and therefore they spread like wildfire, and therefore they became used in evaluation. Implying a very different sequence:

1. A metric is invented, often for a reason completely unrelated to evaluating researchers (impact factors started out as a way for librarians to rank journals, not for administration to rank faculty!).
2. Because a metric is simple, it becomes widespread.
3. Because a metric is both simple and widespread, it makes it easy to compare people in wildly different circumstances (whether or not that comparison is valid or defensible!), so it rapidly evolves from being trivia about a researcher, to being a defining character of a researcher – at least when it comes to institutional evaluation.

If that’s true, then any metric aimed for wide-scale adoption needs to be as simple as possible. I can explain the h-index or i10 index in one sentence. “Citation count” is self-explanatory. The fundamentals of the impact factor can be grasped in about 30 seconds, and even the complicated backstory can be conveyed in about 5 minutes.

In addition to being simple, the metric needs to work the same way across institutions and disciplines. I can compare my h-index with that of an endowed chair at Cambridge, a curator at a small regional museum, and a postdoc at Podunk State, and it Just Works without any tinkering or subjective decisions on the part of the user (other than What Counts – but that affects all metrics dealing with publications, so no one metric is better off than any other on that score).

I fear that the LWM as conceived in Taylor (2016) is doomed, for the following reasons:

  • It’s too complex. It would probably be doomed if it had just a single term with a constant and an exponent (which I realize would defeat the purpose of having either a constant or an exponent), because that’s more math than either an impact factor or an h-index requires (perceptively, anyway – in the real world, most people’s eyes glaze over when the exponents come out).
  • Worse, it requires loads of subjective decisions and assigning importance on the part of the users.
  • And fatally, it would require a mountain of committee work to sort that out. I doubt if I could get the faculty in just one department to agree on a set of terms, constants, and exponents for the LWM, much less a college, much less a university, much less all of the universities, museums, government and private labs, and other places where research is done. And without the promise of universal applicability, there’s no incentive for any institution to put itself through the hell of work it would take to implement.

Really, the only way I think the LWM could get into place is by fiat, by a government body. If the EPA comes up with a more complicated but also more accurate way to measure, say, airborne particle output from car exhausts, they can theoretically say to the auto industry, “Meet this standard or stop selling cars in the US” (I know there’s a lot more legislative and legal push and pull than that, but it’s at least possible). And such a standard might be adopted globally, either because it’s a good idea so it spreads, or because the US strong-arms other countries into following suit.

Even if I trusted the US Department of Education to fill in all of the blanks for an LWM, I don’t know that they’d have the same leverage to get it adopted. I doubt that the DofE has enough sway to get it adopted even across all of the educational institutions. Who would want that fight, for such a nebulous pay-off? And even if it could be successfully inflicted on educational institutions (which sounds negative, but that’s precisely how the institutions would see it), what about the numerous and in some cases well-funded research labs and museums that don’t fall under the DofE’s purview? And that’s just in the US. The culture of higher education and scholarship varies a lot among countries. Which may be why the one-size-fits-all solutions suck – I am starting to wonder if a metric needs to be broken, to be globally applicable.

The problem here is that the user base is so diverse that the only way metrics get adopted is voluntarily. So the challenge for any LWM is to be:

  1. Better than existing metrics – this is the easy part – and,
  2. Simple enough to be both easily grasped, and applied with minimal effort. In Malcolm Gladwell Tipping Point terms, it needs to be “sticky”. Although a better adjective for passage through the intestines of academia might be “smooth” – that is, having no rough edges, like exponents or overtly subjective decisions*, that would cause it to snag.

* Calculating an impact factor involves plenty of subjective decisions, but it has the advantages that (a) the users can pretend otherwise, because (b) ISI does the ‘work’ for them.

At least from my point of view, the LWM as Mike has conceived it is awesome and possibly unimprovable on the first point (in that practically any other metric could be seen as a degenerate case of the LWM), but dismal and possibly pessimal on the second one, in that it requires mounds of subjective decision-making to work at all. You can’t even get a default number and then iteratively improve it without investing heavily in advance.

An interesting thought experiment would be to approach the problem from the other side: invent as many new simple metrics as possible, and then see if any of them offer advantages over the existing ones. Although I have a feeling that people are already working on that, and have been for some time.

Simple, broken metrics like impact factor are the prions of scholarship. Yes, viruses are more versatile and cells more versatile still, by orders of magnitude, but compared to prions, cells take an awesome amount of effort to build and maintain. If you just want to infect someone and you don’t care how, prions are very hard to beat. And they’re so subtle in their machinations that we only became aware of them comparatively recently – much like the emerging problems with “classical” (e.g., non-alt) metrics.

I’d love to be wrong about all of this. I proposed the strongest criticism of the LWM I could think of, in hopes that someone would come along and tear it down. Please start swinging.

You’ll remember that in the last installment (before Matt got distracted and wrote about archosaur urine), I proposed a general schema for aggregating scores in several metrics, terming the result an LWM or Less Wrong Metric. Given a set of n metrics that we have scores for, we introduce a set of n exponents eᵢ which determine how we scale each kind of score as it increases, and a set of n factors kᵢ which determine how heavily we weight each scaled score. Then we sum the scaled results:

LWM = k₁·x₁^e₁ + k₂·x₂^e₂ + … + kₙ·xₙ^eₙ
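
For the concretely minded, here is a minimal sketch of that calculation in Python. It is an illustration only: the metric names, weights and exponents are invented, not values from the paper.

```python
def lwm(scores, weights, exponents):
    """Less Wrong Metric: the sum of k_i * x_i ** e_i over all metrics."""
    return sum(k * x ** e for k, x, e in zip(weights, scores, exponents))

# Invented example: two metrics (say, H-index and Twitter followers)
# with made-up weights and exponents.
print(lwm(scores=[20, 5000], weights=[2.0, 0.1], exponents=[1.5, 0.5]))
```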

“That’s all very well”, you may ask, “But how do we choose the parameters?”

Here’s what I proposed in the paper:

One approach would be to start with subjective assessments of the scores of a body of researchers – perhaps derived from the faculty of a university confidentially assessing each other. Given a good-sized set of such assessments, together with the known values of the metrics x₁, x₂, …, xₙ for each researcher, techniques such as simulated annealing can be used to derive the values of the parameters k₁, k₂, …, kₙ and e₁, e₂, …, eₙ that yield an LWM formula best matching the subjective assessments.

Where the results of such an exercise yield a formula whose results seem subjectively wrong, this might flag a need to add new metrics to the LWM formula: for example, a researcher might be more highly regarded than her LWM score indicates because of her fine record of supervising doctoral students who go on to do well, indicating that some measure of this quality should be included in the LWM calculation.

I think as a general approach that is OK: start with a corpus of well understood researchers, or papers, whose value we’ve already judged a priori by some means; then pick the parameters that best approximate that judgement; and let those parameters control future automated judgements.
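
To make that concrete, here is a rough Python sketch of the fitting step, using invented data and scipy’s dual_annealing (a simulated-annealing-style global optimiser) to choose the kᵢ and eᵢ that best match the subjective assessments. Everything here — the numbers, the bounds, the error function — is an assumption for illustration, not anything prescribed in the paper.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Invented data: rows are researchers, columns are metric scores x_1 ... x_n.
metrics = np.array([[12.0, 300.0, 4.0],
                    [30.0, 150.0, 9.0],
                    [ 5.0, 900.0, 2.0],
                    [22.0,  80.0, 7.0]])
# Averaged confidential assessments of the same researchers (also invented).
assessments = np.array([5.0, 8.5, 3.0, 7.0])

n = metrics.shape[1]

def lwm(params, X):
    k, e = params[:n], params[n:]          # weights and exponents
    return (k * X ** e).sum(axis=1)        # k_1*x_1^e_1 + ... + k_n*x_n^e_n per row

def error(params):
    return ((lwm(params, metrics) - assessments) ** 2).sum()

# Search k_i in [0, 10] and e_i in [0.1, 3]; a global optimiser, since the
# error surface need not be well behaved.
bounds = [(0.0, 10.0)] * n + [(0.1, 3.0)] * n
result = dual_annealing(error, bounds, seed=0, maxiter=200)
print("fitted k:", result.x[:n], "fitted e:", result.x[n:])
```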

The problem, really, is how we make that initial judgement. In the scenario I originally proposed, where say the 50 members of a department each assign a confidential numeric score to all the others, you can rely to some degree on the wisdom of crowds to give a reasonable judgement. But I don’t know how politically difficult it would be to conduct such an exercise. Even if the individual scorers were anonymised, the person collating the data would know the total scores awarded to each person, and it’s not hard to imagine that data being abused. In fact, it’s hard to imagine it not being abused.

In other situations, the value of the subjective judgement may be close to zero anyway. Suppose we wanted to come up with an LWM that indicates how good a given piece of research is. We choose LWM parameters based on the scores that a panel of experts assign to a corpus of existing papers, and derive our parameters from that. But we know that experts are really bad at assessing the quality of research. So what would our carefully parameterised LWM be approximating? Only the flawed judgement of flawed experts.

Perhaps this points to an even more fundamental problem: do we even know what “good research” looks like?

It’s a serious question. We all know that “research published in high-Impact Factor journals” is not the same thing as good research. We know that “research with a lot of citations” is not the same thing as good research. For that matter, “research that results in a medical breakthrough” is not necessarily the same thing as good research. As the new paper points out:

If two researchers run equally replicable tests of similar rigour and statistical power on two sets of compounds, but one of them happens to have in her batch a compound that turns out to have useful properties, should her work be credited more highly than the similar work of her colleague?

What, then? Are we left only with completely objective measurements, such as statistical power, adherence to the COPE code of conduct, open-access status, or indeed correctness of spelling?

If we accept that (and I am not arguing that we should, at least not yet), then I suppose we don’t even need an LWM for research papers. We can just count these objective measures and call it done.

I really don’t know what my conclusions are here. Can anyone help me out?

I said last time that my new paper on Better ways to evaluate research and researchers proposes a family of Less Wrong Metrics, or LWMs for short, which I think would at least be an improvement on the present ubiquitous use of impact factors and H-indexes.

What is an LWM? Let me quote the paper:

The Altmetrics Manifesto envisages no single replacement for any of the metrics presently in use, but instead a palette of different metrics laid out together. Administrators are invited to consider all of them in concert. For example, in evaluating a researcher for tenure, one might consider H-index alongside other metrics such as number of trials registered, number of manuscripts handled as an editor, number of peer-reviews submitted, total hit-count of posts on academic blogs, number of Twitter followers and Facebook friends, invited conference presentations, and potentially many other dimensions.

In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these.

This is a key problem of the world we actually live in. We often bemoan the fact that people evaluating research will apparently do almost anything rather than actually read the research. (To paraphrase Dave Barry, these are important, busy people who can’t afford to fritter away their time in competently and diligently doing their job.) There may be good reasons for this; there may only be bad reasons. But what we know for sure is that, for good reasons or bad, administrators often do want a single number. They want it so badly that they will seize on the first number that comes their way, even if it’s as horribly flawed as an impact factor or an H-index.

What to do? There are two options. One is to change the way these overworked administrators function, to force them to read papers and consider a broad range of metrics — in other words, to change human nature. Yeah, it might work. But it’s not where the smart money is.

So perhaps the way to go is to give these people a better single number. A less wrong metric. An LWM.

Here’s what I propose in the paper.

In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these. Given a range of metrics x₁, x₂, …, xₙ, there will be a temptation to simply add them all up to yield a “super-metric”, x₁ + x₂ + … + xₙ. Such a simply derived value will certainly be misleading: no-one would want a candidate with 5,000 Twitter followers and no publications to appear a hundred times stronger than one with an H-index of 50 and no Twitter account.

A first step towards refinement, then, would weight each of the individual metrics using a set of constant parameters k₁, k₂, …, kₙ to be determined by judgement and experiment. This yields another metric, k₁·x₁ + k₂·x₂ + … + kₙ·xₙ. It allows the down-weighting of less important metrics and the up-weighting of more important ones.

However, even with well-chosen kᵢ parameters, this better metric has problems. Is it really a hundred times as good to have 10,000 Twitter followers than 100? Perhaps we might decide that it’s only ten times as good – that the value of a Twitter following scales with the square root of the count. Conversely, in some contexts at least, an H-index of 40 might be more than twice as good as one of 20. In a search for a candidate for a senior role, one might decide that the value of an H-index scales with the square of the value; or perhaps it scales somewhere between linearly and quadratically – with H-index^1.5, say. So for full generality, the calculation of the “Less Wrong Metric”, or LWM for short, would be configured by two sets of parameters: factors k₁, k₂, …, kₙ, and exponents e₁, e₂, …, eₙ. Then the formula would be:

LWM = k₁·x₁^e₁ + k₂·x₂^e₂ + … + kₙ·xₙ^eₙ

So that’s the idea of the LWM — and you can see now why I refer to this as a family of metrics. Given n metrics that you’re interested in, you pick 2n parameters to combine them with, and get a number that to some degree measures what you care about.
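
As a worked example of why the weights and exponents matter, take the candidates from the quoted passage: 5,000 Twitter followers and no publications versus an H-index of 50 and no Twitter account. The parameter values in this little sketch are purely illustrative guesses, not recommendations from the paper.

```python
followers_a, h_index_a = 5000, 0   # candidate A
followers_b, h_index_b = 0, 50     # candidate B

# Naive "super-metric": just add the raw scores.
print(followers_a + h_index_a, followers_b + h_index_b)   # 5000 vs 50

# LWM with illustrative parameters: Twitter following scales like a
# square root, H-index super-linearly.
k_tw, e_tw = 0.1, 0.5
k_h,  e_h  = 2.0, 1.5
lwm_a = k_tw * followers_a ** e_tw + k_h * h_index_a ** e_h   # ~7.1
lwm_b = k_tw * followers_b ** e_tw + k_h * h_index_b ** e_h   # ~707.1
print(lwm_a, lwm_b)
```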

(How do you choose your 2n parameters? That’s the subject of the next post. Or, as before, you can skip ahead and read the paper.)

References

  • Taylor, Michael P. 2016. Better ways to evaluate research and researchers. SPARC Europe Briefing Papers.

Like Stephen Curry, we at SV-POW! are sick of impact factors. That’s not news. Everyone now knows what a total disaster they are: how they are significantly correlated with retraction rate but not with citation count; how they are higher for journals whose studies are less statistically powerful; how they incentivise bad behaviour including p-hacking and over-hyping. (Anyone who didn’t know all that is invited to read Brembs et al.’s 2013 paper Deep impact: unintended consequences of journal rank, and weep.)

It’s 2016. Everyone who’s been paying attention knows that impact factor is a terrible, terrible metric for the quality of a journal, a worse one for the quality of a paper, and not even in the ballpark as a metric for the quality of a researcher.

Unfortunately, “everyone who’s been paying attention” doesn’t seem to include such figures as search committees picking people for jobs, department heads overseeing promotion, tenure committees deciding on researchers’ job security, and I guess granting bodies. In the comments on this blog, we’ve been told time and time and time again — by people who we like and respect — that, however much we wish it weren’t so, scientists do need to publish in high-IF journals for their careers.

What to do?

It’s a complex problem, not well suited to discussion on Twitter. Here’s what I wrote about it recently:

The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.

It is no exaggeration to say that improving assessment is literally the most important challenge facing academia.

This is from the introduction to a new paper which came out today: Taylor (2016), Better ways to evaluate research and researchers. In eight short pages — six, really, if you ignore the appendix — I try to get to grips with the historical background that got us to where we are, I discuss some of the many dimensions we should be using to evaluate research and researchers, and I propose a family of what I call Less Wrong Metrics — LWMs — that administrators could use if they really absolutely have to put a single number on things.

(I was solicited to write this by SPARC Europe, I think in large part because of things I have written around this subject here on SV-POW! My thanks to them: this paper becomes part of their Briefing Papers series.)

Next time I’ll talk about the LWM and how to calculate it. Those of you who are impatient might want to read the actual paper first!

References

  • Brembs, Björn, Katherine Button and Marcus Munafò. 2013. Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience 7:291.
  • Taylor, Michael P. 2016. Better ways to evaluate research and researchers. SPARC Europe Briefing Papers.

Re-reading an email that Matt sent me back in January, I see this:

One quick point about [an interesting sauropod specimen]. I can envision writing that up as a short descriptive paper, basically to say, “Hey, look at this weird thing we found! Morrison sauropod diversity is still underestimated!” But I honestly doubt that we’ll ever get to it — we have literally years of other, more pressing work in front of us. So maybe we should just do an SV-POW! post about the weirdness of [that specimen], so that the World Will Know.

Although as soon as I write that, I think, “Screw that, I’m going to wait until I’m not busy* and then just take a single week* and rock out a wiper* on it.”

I realize that this way of thinking represents a profound and possibly psychotic break with reality. *Thrice! But it still creeps up on me.

(For anyone not familiar with the “wiper”, it refers to a short paper of only one or two pages. The etymology is left as an exercise to the reader.)

It’s just amazing how we keep on and on falling for this delusion that we can get a paper out quickly, even when we know perfectly well, going into the project, that it’s not going to work out that way. To pick a recent example, my paper on quantifying the effect of intervertebral cartilage on neutral posture was intended to be literally one page, an addendum to the earlier paper on cartilage: title, one paragraph of intro, diagram, equation, single reference, DONE! Instead, it landed up being 11 pages long with five illustrations and two tables.

I think it’s a reasonable approximation to say that any given project will require about an order of magnitude more work than we expect at the outset.

Even as I write this, the top of my palaeo-work priority list is a paper that I’m working on with Matt and two other colleagues, which he kicked off on 6 May, writing:

I really, really want to kill this off absolutely ASAP. Like, seriously, within a week or two. Is that cool? Is that doable?

To which I idiotically replied:

IT SHALL BE SO!

A month and a bit later, the answers to Matt’s questions are clear. Yes, it’s cool; and no, it’s not doable.

The thing is, I think that’s … kind of OK. The upshot is that we end up writing reasonably substantial papers, which is after all what we’re meant to be trying to do. If the reasonably substantial papers that end up getting written aren’t necessarily the ones we thought they were going to be, well, that’s not a problem. After all, as I’ve noted before, my entire Ph.D dissertation was composed of side-projects, and I never got around to doing the main project. That’s fine.

In 2011, Matt’s tutorial on how to find problems to work on discussed in detail how projects grow and mutate and anastomose. I’m giving up on thinking that this is a bad thing, abandoning the idea that I ought to be in control of my own research program. I’m just going to keep chasing whatever rabbits look good to me at the time, and see what happens.

Onwards!

I’ll try to live-blog the first day of part 2 of the Royal Society’s Future of Scholarly Scientific Communication meeting, as I did for the first day of part 1. We’ll see how it goes.

Here’s the schedule for today and tomorrow.

Session 1: the reproducibility problem

Chair: Alex Halliday, vice-president of the Royal Society

Introduction to reproducibility. What it means, how to achieve it, what role funding organisations and publishers might play.

For an introduction/overview, see #FSSC – The role of openness and publishers in reproducible research.

Michele Dougherty, planetary scientist

It’s very humbling being at this meeting, when it’s so full of people who have done astonishing things. For example, Dougherty discovered an atmosphere around one of Saturn’s moons by an innovative use of magnetic field data. So many awesome people.

Her work is largely to do with very long-term projects involving planetary probes, e.g. the Cassini-Huygens probe. It’s going to be interesting to know what can be said about reproducibility of experiments that take decades and cost billions.

“The best science output you can obtain is as a result of collaboration with lots of different teams.”

Application of reproducibility here is about making the data from the probes available to the scientific community — and the general public — so that the result of analysis can be reproduced. So not experimental replication.

Such data often has a proprietary period (essentially an embargo) before its public release, partly because it’s taken 20 years to obtain and the team that did this should get the first crack at it. But it all has to be made publicly available.

Dorothy Bishop, chair of Academy of Medical Sciences group on replicability

The Royal Society is very much not the first to be talking about replicability — these discussions have been going on for years.

About 50% of studies in Bishop’s field are capable of replication. Numbers are even worse in some fields. Replication of drug trials is particularly important, as false results kill people.

Journals cause awful problems with impact-chasing: e.g. high-impact journals will publish sexy-looking autism studies with tiny samples, which no reputable medical journal would publish.

Statistical illiteracy is very widespread. Authors can give the impression of being statistically aware but in a superficial way.

Too much HARKing going on (Hypothesising After Results Known — searching a dataset for anything that looks statistically significant in the shallow p < 0.05 sense.)

“It’s just assumed that people doing research, know what they are doing. Often that’s just not the case.”

Many more criticisms of how the journal system encourages bad research. They’re coming much faster than I can type them. This is a storming talk; I wish the record would be made available.

Employers are also to blame for prioritising expensive research proposals (= large grants) over good ones.

All of this causes non-replicable science.

Floor discussion

Lots of great stuff here that I just can’t capture, sorry. Best follow the tweet stream for the fast-moving stuff.

One highlight: Pat Brown thinks it’s not necessarily a problem if lots of statistically underpowered studies are performed, so long as they’re recognised as such. Dorothy Bishop politely but emphatically disagrees: they waste resources, and produce results that are not merely useless but actively wrong and harmful.

David Colhoun comments from the floor: while physical sciences consider “significant results” to be five sigmas (p < 0.000001), biomed is satisfied with slightly less than two sigmas (p < 0.05), which really should be interpreted only as “worth another look”.
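
(If you want to check that correspondence yourself, the sigma-to-p conversion is a one-liner with scipy; a quick sketch, not part of the talk:)

```python
from scipy.stats import norm

# Two-tailed p-values corresponding to 2-sigma and 5-sigma results.
for sigmas in (2, 5):
    print(sigmas, "sigma ->", 2 * norm.sf(sigmas))
# 2 sigma -> ~0.046   (roughly the familiar p < 0.05 threshold)
# 5 sigma -> ~5.7e-07 (the "five sigma" physics convention)
```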

Dorothy Bishop on publishing data, and authors’ reluctance to do so: “It should be accepted as a cultural norm that mistakes in data do happen, rather than shaming people who make data open.”

Coffee break

Nothing to report :-)

Session 2: what can be done to improve reproducibility?

Iain Hrynaszkiewicz, head of data, Nature

In an analysis of retractions of papers in PubMed Central, 2/3 were due to fraud and 20% due to error.

Access to methods and data is a prerequisite for replicability.

Pre-registration, sharing of data, reporting guidelines all help.

“Open access is important, but it’s only part of the solution. Openness is a means to an end.”

Hrynaszkiewicz says text-miners are a small minority of researchers. [That is true now, but I and others are confident this will change rapidly as the legal and technical barriers are removed: it has to, since automated reading is the only real solution to the problem of keeping up with an exponentially growing literature. — Ed.]

Floor discussion