September 18, 2016
I have before me the reviews for a submission of mine, and the handling editor has provided an additional stipulation:
Authority and date should be provided for each species-level taxon at first mention. Please ensure that the nominal authority is also included in the reference list.
In other words, the first time I mention Diplodocus, I should say “Diplodocus Marsh 1878”; and I should add the corresponding reference to my bibliography.
What do we think about this?
I used to do this religiously in my early papers, just because it was the done thing. But then I started to think about it. To my mind, it used to make a certain amount of sense 30 years ago. But surely in 2016, if anyone wants to know about the taxonomic history of Diplodocus, they’re going to go straight to Wikipedia?
I’m also not sure what the value is in providing the minimal taxonomic-authority information rather than, say, morphological information. Anyone who wants to know what Diplodocus is would do much better to go to Hatcher 1901, so wouldn’t we serve readers better if we referred to “Diplodocus (Hatcher 1901)”?
Now that I come to think of it, I included “Giving the taxonomic authority after first use of each formal name” in my list of Idiot things that we do in our papers out of sheer habit three and a half years ago.
Should I just shrug and do this pointless busywork to satisfy the handling editor? Or should I simply refuse to waste my time adding information that will be of no use to anyone?
- Hatcher, John Bell. 1901. Diplodocus (Marsh): its osteology, taxonomy and probable habits, with a restoration of the skeleton. Memoirs of the Carnegie Museum 1:1-63 and plates I-XIII.
- Marsh, O. C. 1878. Principal characters of American Jurassic dinosaurs, Part I. American Journal of Science, series 3 16:411-416.
September 14, 2016
Long-time SV-POW! readers will remember that three years ago, full of enthusiasm after speaking about Barosaurus at the Edinburgh SVPCA, Matt and I got that talk written up in double-quick time and had it published as a PeerJ Preprint in less than three weeks. Very quickly, the preprint attracted substantive, helpful reviews: three within the first 24 hours, and several more in the next few days.
This was great: it gave us the opportunity to handle those review comments and get the manuscript turned around into an already-reviewed formal journal submission in less than a month from the original talk.
So of course what we did instead was: nothing. For three years.
I can’t excuse that. I can’t even explain it. It’s not as though we’ve spent those three years churning out a torrent of other awesome papers. We’ve both just been … a bit lame.
Anyway, here’s a story that will be hauntingly familiar. A month ago, full of enthusiasm after speaking about Barosaurus at the Liverpool SVPCA, Matt and I found ourselves keen to write up that talk in double-quick time. It’s an exciting tale of new specimens, reinterpretation of an important old specimen, and a neck eight times as long as that of a world-record giraffe.
But it would be crazy to write the new Barosaurus paper without first having dealt with the old Barosaurus paper. So now, finally, three years on, we’ve done that. Version 2 of the preprint is now available (Taylor and Wedel 2016), incorporating all the fine suggestions of the people who reviewed the first version — and with a slightly spiffed-up title. What’s more, the new version has also been submitted for formal peer-review. (In retrospect, I can’t think why we didn’t do that when we put the first preprint up.)
A big part of the purpose of this post is to thank Emanuel Tschopp, Mark Robinson, Andy Farke, John Foster and Mickey Mortimer for their reviews back in 2013. I know it’s overdue, but they are at least all acknowledged in the new version of the manuscript.
Now we cross our fingers, and hope that the formally solicited reviews for the new version of the manuscript are as helpful and constructive as the reviews in that first round. Once those reviews are in, we should be able to move quickly and painlessly to a formally published version of this paper. (I know, I know — I shouldn’t offer such a hostage to fortune.)
Meanwhile, I will finally be working on handling the reviews of this other PeerJ submission, which I received back in October last year. Yes, I have been lax; but I am back in the saddle now.
- Taylor, Michael P., and Mathew J. Wedel. 2016. The neck of Barosaurus: longer, wider and weirder than those of Diplodocus and other diplodocines. PeerJ PrePrints 1:e67v2 doi:10.7287/peerj.preprints.67v2
Long time readers may remember the stupid contortions I had to go through in order to avoid giving the Geological Society copyright in my 2010 paper about the history of sauropod research, and how the Geol. Soc. nevertheless included a fraudulent claim of copyright ownership in the published version.
The way I left it back in 2010, my wife, Fiona, was the copyright holder. I should have fixed this a while back, but I now note for the record that she has this morning assigned copyright back to me:
From: Fiona Taylor <REDACTED>
To: Mike Taylor <email@example.com>
Date: 15 August 2016 at 11:03
I, Fiona J. Taylor of Oakleigh Farm House, Crooked End, Ruardean, GL17 9XF, England, hereby transfer to you, Michael P. Taylor of Oakleigh Farm House, Crooked End, Ruardean, GL17 9XF, England, the copyright of your article “Sauropod dinosaur research: a historical review”. This email constitutes a legally binding transfer.
Sorry to post something so boring, after so long a gap (nearly a month!). Hopefully we’ll have some more interesting things to say — and some time to say them — soon!
June 20, 2016
Back in mid-April, when I (Mike) was at the OSI2016 conference, I was involved in the “Moral Dimensions of Open” group. (It was in preparation for this that I wrote the Moral Dimensions series of posts here on SV-POW!.)
Like all the other groups, ours was tasked with making a presentation to the plenary session, taking questions and feedback, and presenting a version 2 on the final day. Here’s the title page that I contributed.
Each group was also asked to write a short paper summarising their discussions and conclusions, with all the papers to be published openly. The resulting papers are now available: sixteen of them in all. And among them is Ansolabehere et al. (2016), “The Moral Dimensions of Open”, of which I am one of nine authors. (There were ten authors of the presentation: for some reason, Ryan Merkley is not on the paper.)
As you can imagine in a group that contained open-access advocates, human rights activists, representatives of both old-school and new-wave publishers, agriculturalists and more, consensus was far from unanimous, and it was quite a rocky road to arriving at a form of the paper that we could all live with. In this case, the standard note that was added to all the papers is very appropriate:
This document reflects the combined input of the authors listed here (in alphabetical order by last name) as well as contributions from other OSI2016 delegates. The findings and recommendations expressed herein do not necessarily reflect the opinions of the individual authors listed here, nor their agencies, trustees, officers, or staff.
Is this the moral-dimensions paper I would have written? No, it’s not. Being a nine-way collaboration, it pulls in too many directions to have as clear a through-line as I’d like; and it’s arguably a bit mealy-mouthed in places. But over all, I am pretty happy with it. I think it makes some important points, and makes them reasonably well given the sometimes clumsy prose that you always get when something is written by committee.
Anyway, I think it’s worth a read.
By the way, I’d like to place on record my thanks to Cheryl Ball of West Virginia University, who did the bulk of the heavy lifting in putting together both the presentation and the paper. While everyone in the group contributed ideas and many contributed prose, Cheryl dug in and did the actual work. Really, she deserves to be lead author on this paper — and would be, but for the alphabetical-order convention.
- Ansolabehere, Karina, Cheryl Ball, Medha Devare, Tee Guidotti, Bill Priedhorsky, Wim van der Stelt, Mike Taylor, Susan Veldsman and John Willinsky. 2016. The Moral Dimensions of Open. Open Scholarship Initiative Proceedings 1 (5 pages). doi:10.13021/G8SW2G
[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly to hopefully improve the flow, but I’ve tried not to tinker with the guts.]
This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem. Then in the next post, he introduced the idea of the LWM, or Less Wrong Metric, and the basic mathematical framework for calculating LWMs. Most recently, Mike talked about choosing parameters for the LWM, and drilled down to a fundamental question: (how) do we identify good research?
Let me say up front that I am fully convicted about the problem of evaluating researchers fairly. It is a question of direct and timely importance to me. I serve on the Promotion & Tenure committees of two colleges at Western University of Health Sciences, and I want to make good decisions that can be backed up with evidence. But anyone who has been in academia for long knows of people who have had their careers mangled by getting caught in institutional machinery that is not well suited to fairly evaluating scholarship. So I desperately want better metrics to catch on, to improve my own situation and those of researchers everywhere.
For all of those reasons and more, I admire the work that Mike has done in conceiving the LWM. But I’m pretty pessimistic about its future.
I think there is a widespread misapprehension that we got here because people and institutions were looking for good metrics, like the LWM, and we ended up with things like impact factors and citation counts because no-one had thought up anything better. Implying a temporal sequence of:
1. Deliberately looking for metrics to evaluate researchers.
2. Finding some.
3. Trying to improve those metrics, or replace them with better ones.
I’m pretty sure this is exactly backwards: the metrics that we use to evaluate researchers are mostly simple – easy to explain, easy to count (the hanky-panky behind impact factors notwithstanding) – and therefore they spread like wildfire, and therefore they became used in evaluation. Implying a very different sequence:
1. A metric is invented, often for a reason completely unrelated to evaluating researchers (impact factors started out as a way for librarians to rank journals, not for administration to rank faculty!).
2. Because a metric is simple, it becomes widespread.
3. Because a metric is both simple and widespread, it makes it easy to compare people in wildly different circumstances (whether or not that comparison is valid or defensible!), so it rapidly evolves from being trivia about a researcher, to being a defining character of a researcher – at least when it comes to institutional evaluation.
If that’s true, then any metric aimed for wide-scale adoption needs to be as simple as possible. I can explain the h-index or i10 index in one sentence. “Citation count” is self-explanatory. The fundamentals of the impact factor can be grasped in about 30 seconds, and even the complicated backstory can be conveyed in about 5 minutes.
In addition to being simple, the metric needs to work the same way across institutions and disciplines. I can compare my h-index with that of an endowed chair at Cambridge, a curator at a small regional museum, and a postdoc at Podunk State, and it Just Works without any tinkering or subjective decisions on the part of the user (other than What Counts – but that affects all metrics dealing with publications, so no one metric is better off than any other on that score).
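To underline just how simple these incumbent metrics are to mechanise, here is a minimal sketch of the h-index computation (the citation counts are invented for illustration):

```python
# Sketch: computing an h-index from a list of per-paper citation counts.
# The h-index is the largest h such that the researcher has h papers
# with at least h citations each.

def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, count in enumerate(ranked, start=1):
        if count >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

A dozen lines, no parameters, no committee: exactly the kind of "smoothness" that lets a metric slide through the intestines of academia unimpeded.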
I fear that the LWM as conceived in Taylor (2016) is doomed, for the following reasons:
- It’s too complex. It would probably be doomed if it had just a single term with a constant and an exponent (which I realize would defeat the purpose of having either a constant or an exponent), because that’s more math than either an impact factor or an h-index requires (perceptually, anyway – in the real world, most people’s eyes glaze over when the exponents come out).
- Worse, it requires loads of subjective decisions and assigning importance on the part of the users.
- And fatally, it would require a mountain of committee work to sort that out. I doubt if I could get the faculty in just one department to agree on a set of terms, constants, and exponents for the LWM, much less a college, much less a university, much less all of the universities, museums, government and private labs, and other places where research is done. And without the promise of universal applicability, there’s no incentive for any institution to put itself through the hell of work it would take to implement.
Really, the only way I think the LWM could get into place is by fiat, by a government body. If the EPA comes up with a more complicated but also more accurate way to measure, say, airborne particle output from car exhausts, they can theoretically say to the auto industry, “Meet this standard or stop selling cars in the US” (I know there’s a lot more legislative and legal push and pull than that, but it’s at least possible). And such a standard might be adopted globally, either because it’s a good idea so it spreads, or because the US strong-arms other countries into following suit.
Even if I trusted the US Department of Education to fill in all of the blanks for an LWM, I don’t know that they’d have the same leverage to get it adopted. I doubt that the DofE has enough sway to get it adopted even across all of the educational institutions. Who would want that fight, for such a nebulous pay-off? And even if it could be successfully inflicted on educational institutions (which sounds negative, but that’s precisely how the institutions would see it), what about the numerous and in some cases well-funded research labs and museums that don’t fall under the DofE’s purview? And that’s just in the US. The culture of higher education and scholarship varies a lot among countries. Which may be why the one-size-fits-all solutions suck – I am starting to wonder if a metric needs to be broken, to be globally applicable.
The problem here is that the user base is so diverse that the only way metrics get adopted is voluntarily. So the challenge for any LWM is to be:
- Better than existing metrics – this is the easy part – and,
- Simple enough to be both easily grasped, and applied with minimal effort. In Malcolm Gladwell’s Tipping Point terms, it needs to be “sticky”. Although a better adjective for passage through the intestines of academia might be “smooth” – that is, having no rough edges, like exponents or overtly subjective decisions*, that would cause it to snag.
* Calculating an impact factor involves plenty of subjective decisions, but it has the advantages that (a) the users can pretend otherwise, because (b) ISI does the ‘work’ for them.
At least from my point of view, the LWM as Mike has conceived it is awesome and possibly unimprovable on the first point (in that practically any other metric could be seen as a degenerate case of the LWM), but dismal and possibly pessimal on the second one, in that it requires mounds of subjective decision-making to work at all. You can’t even get a default number and then iteratively improve it without investing heavily in advance.
An interesting thought experiment would be to approach the problem from the other side: invent as many new simple metrics as possible, and then see if any of them offer advantages over the existing ones. Although I have a feeling that people are already working on that, and have been for some time.
Simple, broken metrics like impact factor are the prions of scholarship. Yes, viruses are more versatile and cells more versatile still, by orders of magnitude, but compared to prions, cells take an awesome amount of effort to build and maintain. If you just want to infect someone and you don’t care how, prions are very hard to beat. And they’re so subtle in their machinations that we only became aware of them comparatively recently – much like the emerging problems with “classical” (e.g., non-alt) metrics.
I’d love to be wrong about all of this. I proposed the strongest criticism of the LWM I could think of, in hopes that someone would come along and tear it down. Please start swinging.
January 29, 2016
You’ll remember that in the last installment (before Matt got distracted and wrote about archosaur urine), I proposed a general schema for aggregating scores in several metrics, terming the result an LWM or Less Wrong Metric. Given a set of n metrics that we have scores for, we introduce a set of n exponents eᵢ which determine how we scale each kind of score as it increases, and a set of n factors kᵢ which determine how heavily we weight each scaled score. Then we sum the scaled results:

LWM = k₁·x₁^e₁ + k₂·x₂^e₂ + … + kₙ·xₙ^eₙ
“That’s all very well”, you may ask, “But how do we choose the parameters?”
Here’s what I proposed in the paper:
One approach would be to start with subjective assessments of the scores of a body of researchers – perhaps derived from the faculty of a university confidentially assessing each other. Given a good-sized set of such assessments, together with the known values of the metrics x₁, x₂ … xₙ for each researcher, techniques such as simulated annealing can be used to derive the values of the parameters k₁, k₂ … kₙ and e₁, e₂ … eₙ that yield an LWM formula best matching the subjective assessments.
Where the results of such an exercise yield a formula whose results seem subjectively wrong, this might flag a need to add new metrics to the LWM formula: for example, a researcher might be more highly regarded than her LWM score indicates because of her fine record of supervising doctoral students who go on to do well, indicating that some measure of this quality should be included in the LWM calculation.
I think as a general approach that is OK: start with a corpus of well understood researchers, or papers, whose value we’ve already judged a priori by some means; then pick the parameters that best approximate that judgement; and let those parameters control future automated judgements.
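To make that fitting process concrete, here is a toy sketch: plain simulated annealing over the k and e parameters, run against a synthetic “department” of four researchers with made-up metric values and made-up subjective scores (every number here is invented for illustration):

```python
import math
import random

# Toy sketch of the fitting idea: given known metric values for each
# researcher and a subjective score for each, search for weights k_i and
# exponents e_i whose LWM best matches the judgements. All data synthetic.

random.seed(42)

def lwm(xs, ks, es):
    """Less Wrong Metric: sum of k_i * x_i ** e_i."""
    return sum(k * x ** e for x, k, e in zip(xs, ks, es))

def error(ks, es, data):
    """Sum of squared differences from the subjective scores."""
    return sum((lwm(xs, ks, es) - target) ** 2 for xs, target in data)

# Synthetic "department": (h-index, peer-reviews submitted) per person,
# plus a made-up confidential subjective assessment of each.
data = [([10, 5], 35.0), ([20, 2], 92.0), ([5, 8], 15.0), ([15, 4], 62.0)]

ks, es = [1.0, 1.0], [1.0, 1.0]
current = error(ks, es, data)
temp = 1.0
for step in range(20000):
    # Perturb one randomly chosen parameter.
    new_ks, new_es = list(ks), list(es)
    group = random.choice((new_ks, new_es))
    group[random.randrange(len(group))] += random.gauss(0, 0.05)
    e = error(new_ks, new_es, data)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the "temperature" cools.
    if e < current or random.random() < math.exp((current - e) / max(temp, 1e-9)):
        ks, es, current = new_ks, new_es, e
    temp *= 0.9995

print("fitted k:", ks, "fitted e:", es, "residual:", current)
```

The mechanics are easy; as the rest of this post argues, the hard part is where the subjective target scores come from in the first place.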
The problem, really, is how we make that initial judgement. In the scenario I originally proposed, where say the 50 members of a department each assign a confidential numeric score to all the others, you can rely to some degree on the wisdom of crowds to give a reasonable judgement. But I don’t know how politically difficult it would be to conduct such an exercise. Even if the individual scorers were anonymised, the person collating the data would know the total scores awarded to each person, and it’s not hard to imagine that data being abused. In fact, it’s hard to imagine it not being abused.
In other situations, the value of the subjective judgement may be close to zero anyway. Suppose we wanted to come up with an LWM that indicates how good a given piece of research is. We choose LWM parameters based on the scores that a panel of experts assign to a corpus of existing papers, and derive our parameters from that. But we know that experts are really bad at assessing the quality of research. So what would our carefully parameterised LWM be approximating? Only the flawed judgement of flawed experts.
Perhaps this points to an even more fundamental problem: do we even know what “good research” looks like?
It’s a serious question. We all know that “research published in high-Impact Factor journals” is not the same thing as good research. We know that “research with a lot of citations” is not the same thing as good research. For that matter, “research that results in a medical breakthrough” is not necessarily the same thing as good research. As the new paper points out:
If two researchers run equally replicable tests of similar rigour and statistical power on two sets of compounds, but one of them happens to have in her batch a compound that turns out to have useful properties, should her work be credited more highly than the similar work of her colleague?
What, then? Are we left only with completely objective measurements, such as statistical power, adherence to the COPE code of conduct, open-access status, or indeed correctness of spelling?
If we accept that (and I am not arguing that we should, at least not yet), then I suppose we don’t even need an LWM for research papers. We can just count these objective measures and call it done.
I really don’t know what my conclusions are here. Can anyone help me out?
The Less Wrong Metric (LWM): towards a not wholly inadequate way of quantifying the value of research
January 26, 2016
I said last time that my new paper on Better ways to evaluate research and researchers proposes a family of Less Wrong Metrics, or LWMs for short, which I think would at least be an improvement on the present ubiquitous use of impact factors and H-indexes.
What is an LWM? Let me quote the paper:
The Altmetrics Manifesto envisages no single replacement for any of the metrics presently in use, but instead a palette of different metrics laid out together. Administrators are invited to consider all of them in concert. For example, in evaluating a researcher for tenure, one might consider H-index alongside other metrics such as number of trials registered, number of manuscripts handled as an editor, number of peer-reviews submitted, total hit-count of posts on academic blogs, number of Twitter followers and Facebook friends, invited conference presentations, and potentially many other dimensions.
In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these.
This is a key problem of the world we actually live in. We often bemoan the fact that people evaluating research will apparently do almost anything rather than actually read the research. (To paraphrase Dave Barry, these are important, busy people who can’t afford to fritter away their time in competently and diligently doing their job.) There may be good reasons for this; there may only be bad reasons. But what we know for sure is that, for good reasons or bad, administrators often do want a single number. They want it so badly that they will seize on the first number that comes their way, even if it’s as horribly flawed as an impact factor or an H-index.
What to do? There are two options. One is to change the way these overworked administrators function, to force them to read papers and consider a broad range of metrics — in other words, to change human nature. Yeah, it might work. But it’s not where the smart money is.
So perhaps the way to go is to give these people a better single number. A less wrong metric. An LWM.
Here’s what I propose in the paper.
In practice, it may be inevitable that overworked administrators will seek the simplicity of a single metric that summarises all of these. Given a range of metrics x₁, x₂ … xₙ, there will be a temptation to simply add them all up to yield a “super-metric”, x₁ + x₂ + … + xₙ. Such a simply derived value will certainly be misleading: no-one would want a candidate with 5,000 Twitter followers and no publications to appear a hundred times stronger than one with an H-index of 50 and no Twitter account.
A first step towards refinement, then, would weight each of the individual metrics using a set of constant parameters k₁, k₂ … kₙ to be determined by judgement and experiment. This yields another metric, k₁·x₁ + k₂·x₂ + … + kₙ·xₙ. It allows the down-weighting of less important metrics and the up-weighting of more important ones.
However, even with well-chosen kᵢ parameters, this better metric has problems. Is it really a hundred times as good to have 10,000 Twitter followers as 100? Perhaps we might decide that it’s only ten times as good – that the value of a Twitter following scales with the square root of the count. Conversely, in some contexts at least, an H-index of 40 might be more than twice as good as one of 20. In a search for a candidate for a senior role, one might decide that the value of an H-index scales with the square of the value; or perhaps it scales somewhere between linearly and quadratically – with H-index^1.5, say. So for full generality, the calculation of the “Less Wrong Metric”, or LWM for short, would be configured by two sets of parameters: factors k₁, k₂ … kₙ, and exponents e₁, e₂ … eₙ. Then the formula would be:
LWM = k₁·x₁^e₁ + k₂·x₂^e₂ + … + kₙ·xₙ^eₙ
So that’s the idea of the LWM — and you can see now why I refer to this as a family of metrics. Given n metrics that you’re interested in, you pick 2n parameters to combine them with, and get a number that to some degree measures what you care about.
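To see those 2n parameters in action, here is a small numeric sketch using the two hypothetical candidates from the passage above; the weights and exponents are invented purely for illustration:

```python
# Metrics: x1 = H-index, x2 = Twitter followers.
# Candidate A: no publications, 5,000 Twitter followers.
# Candidate B: H-index of 50, no Twitter account.

def lwm(xs, ks, es):
    """Less Wrong Metric: sum of k_i * x_i ** e_i."""
    return sum(k * x ** e for x, k, e in zip(xs, ks, es))

candidate_a = [0, 5000]
candidate_b = [50, 0]

# Naive "super-metric": all k_i = 1, all e_i = 1, i.e. a plain sum.
# A appears a hundred times stronger than B, which no-one wants.
print(lwm(candidate_a, [1, 1], [1, 1]))  # -> 5000
print(lwm(candidate_b, [1, 1], [1, 1]))  # -> 50

# Parameterised LWM: square-root scaling and a low weight for
# followers, superlinear scaling for H-index (values invented).
ks, es = [1.0, 0.1], [1.5, 0.5]
print(lwm(candidate_a, ks, es))  # 0.1 * 5000**0.5, about 7.1
print(lwm(candidate_b, ks, es))  # 50**1.5, about 353.6
```

With even roughly sensible parameters, the ranking flips to the one our intuition demands; the whole question, of course, is how the parameters get chosen.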