I got an email this morning from Jim Kirkland, announcing:

All of the lectures (with permission to be filmed) will be available on the NHMU YouTube channel. I just wrapped the edit of the 6th video which should be available later today. However, 5 of the lectures are now edited and already available for viewing. They can be found here.

And by the time I read that message, the sixth talk had appeared!

Each talk is 20-25 minutes long, so there’s a good two and a quarter hours of solid but accessible science here, freely available to anyone who wants to watch them. Here, to get you started, is long-time friend of SV-POW!, Randy Irmis, on Discovering Dinosaur Origins in Utah:

It’s great that the DinoFest people are doing this. In 2017, it should really be the default — and yet I can’t think of a single vertebrate palaeo conference that has done this before. (Did I miss some? Links, please!)

I know it’s one more thing for conference organisers to have to think about (or, more optimistically, one more thing for them to delegate), but I hope we’ll be seeing a lot more of it!

Back in February last year, I had the privilege of giving one of the talks in the University of Manchester’s PGCert course “Open Knowledge in Higher Education”. I took the subject “Should science always be open?”

My plan was to give an extended version of a talk I’d given previously at ESOF 2014. But the sessions before mine raised all sorts of issues about copyright, and its effect on scholarly communication and the progress of science, and so I found myself veering off piste. The first eight and a half minutes are as planned; from there, I go off on an extended tangent. Well. See what you think.

The money quote (starting at 12m10s): “What is copyright? It’s a machine for preventing the creation of wealth.”

I have before me the reviews for a submission of mine, and the handling editor has provided an additional stipulation:

Authority and date should be provided for each species-level taxon at first mention. Please ensure that the nominal authority is also included in the reference list.

In other words, the first time I mention Diplodocus, I should say “Diplodocus Marsh 1878”; and I should add the corresponding reference to my bibliography.

Marsh (1878: plate VIII in part). The only illustration of Diplodocus material in the paper that named the genus.

What do we think about this?

I used to do this religiously in my early papers, just because it was the done thing. But then I started to think about it. To my mind, it made a certain amount of sense 30 years ago. But surely in 2016, if anyone wants to know about the taxonomic history of Diplodocus, they’re going to go straight to Wikipedia?

I’m also not sure what the value is in providing the minimal taxonomic-authority information rather than, say, morphological information. Anyone who wants to know what Diplodocus is would be much better off going to Hatcher 1901, so wouldn’t we serve readers better if we referred to “Diplodocus (Hatcher 1901)”?

Now that I come to think of it, I included “Giving the taxonomic authority after first use of each formal name” in my list of Idiot things that we do in our papers out of sheer habit three and a half years ago.

Should I just shrug and do this pointless busywork to satisfy the handling editor? Or should I simply refuse to waste my time adding information that will be of no use to anyone?


  • Hatcher, John B. 1901. Diplodocus (Marsh): its osteology, taxonomy and probable habits, with a restoration of the skeleton. Memoirs of the Carnegie Museum 1:1-63 and plates I-XIII.
  • Marsh, O. C. 1878. Principal characters of American Jurassic dinosaurs, Part I. American Journal of Science, series 3 16:411-416.


As explained in careful detail over at Stupid Patent of the Month, Elsevier has applied for, and been granted, a patent for online peer-review. The special sauce that persuaded the US Patent Office that this is a new invention is cascading peer review — an idea so obvious and so well-established that even The Scholarly Kitchen was writing about it as a commonplace in 2010.

Apparently this is from the actual patent. I can’t verify that at the moment, as the site hosting it seems to be down.

Well. What can this mean?

A cynic might think that this is the first step an untrustworthy company would take preparatory to filing a lot of time-wasting and resource-sapping nuisance lawsuits on its smaller, faster-moving competitors. They certainly have previous in the courts: remember that they have brought legal action against their own customers, as well as threatening Academia.edu and of course trying to take Sci-Hub down.

Elsevier representatives are talking this down: Tom Reller has tweeted “There is no need for concern regarding the patent. It’s simply meant to protect our own proprietary waterfall system from being copied” — which would be fine, had their proprietary waterfall system not been itself copied from the ample prior art. Similarly, Alicia Wise has said on a public mailing list “People appear to be suggesting that we patented online peer review in an attempt to own it.  No, we just patented our own novel systems.” Well. Let’s hope.

But Cathy Wojewodzki, on the same list, asked the key question:

I guess our real question is Why did you patent this? What is it you hope to market or control?

We await a meaningful answer.

Long time readers may remember the stupid contortions I had to go through in order to avoid giving the Geological Society copyright in my 2010 paper about the history of sauropod research, and how the Geol. Soc. nevertheless included a fraudulent claim of copyright ownership in the published version.

The way I left it back in 2010, my wife, Fiona, was the copyright holder. I should have fixed this a while back, but I now note for the record that she has this morning assigned copyright back to me:

From: Fiona Taylor <REDACTED>
To: Mike Taylor <mike@indexdata.com>
Date: 15 August 2016 at 11:03
Subject: Transfer

I, Fiona J. Taylor of Oakleigh Farm House, Crooked End, Ruardean, GL17 9XF, England, hereby transfer to you, Michael P. Taylor of Oakleigh Farm House, Crooked End, Ruardean, GL17 9XF, England, the copyright of your article “Sauropod dinosaur research: a historical review”. This email constitutes a legally binding transfer.

Sorry to post something so boring, after so long a gap (nearly a month!). Hopefully we’ll have some more interesting things to say — and some time to say them — soon!

As a long-standing proponent of preprints, it bothers me that of all PeerJ’s preprints, by far the one that has had the most attention is Terrell et al. (2016)’s Gender bias in open source: Pull request acceptance of women versus men. Not helped by a misleading abstract, we’ve been getting headlines like these:

But in fact, as Kate Jeffrey points out in a comment on the preprint (emphasis added):

The study is nice but the data presentation, interpretation and discussion are very misleading. The introduction primes a clear expectation that women will be discriminated against while the data of course show the opposite. After a very large amount of data trawling, guided by a clear bias, you found a very small effect when the subjects were divided in two (insiders vs outsiders) and then in two again (gendered vs non-gendered). These manipulations (which some might call “p-hacking”) were not statistically compensated for. Furthermore, you present the fall in acceptance for women who are identified by gender, but don’t note that men who were identified also had a lower acceptance rate. In fact, the difference between men and women, which you have visually amplified by starting your y-axis at 60% (an egregious practice) is minuscule. The prominence given to this non-effect in the abstract, and the way this imposes an interpretation on the “gender bias” in your title, is therefore unwarranted.

And James Best, in another comment, explains:

Your most statistically significant results seem to be that […] reporting gender has a large negative effect on acceptance for all outsiders, male and female. These two main results should be in the abstract. In your abstract you really should not be making strong claims about this paper showing bias against women because it doesn’t. For the inside group it looks like the bias moderately favours women. For the outside group the biggest effect is the drop for both genders. You should hence be stating that it is difficult to understand the implications for bias in the outside group because it appears the main bias is against people with any gender vs people who are gender neutral.

Here is the key graph from the paper:

[Figure 5 of Terrell et al. 2016]

(The legends within the figure are tiny: on the Y-axes, they both read “acceptance rate”; and along the X-axis, from left to right, they read “Gender-Neutral”, “Gendered” and then again “Gender-Neutral”, “Gendered”.)

So James Best’s analysis is correct: the real finding of the study is a truly bizarre one, that disclosing your gender, whatever that gender is, reduces the chance of code being accepted. For “insiders” (members of the project team), the effect is slightly stronger for men; for “outsiders” it is rather stronger for women. (Note by the way that all the differences are much less than they appear, because the Y-axis runs from 60% to 90%, not 0% to 100%.)

Why didn’t the authors report this truly fascinating finding in their abstract? It’s difficult to know, but it’s hard not to at least wonder whether they felt that the story they told would get more attention than their actual findings — a feeling that has certainly been confirmed by sensationalist stories like Sexism is rampant among programmers on GitHub, researchers find (Yahoo Finance).

I can’t help but think of Alan Sokal’s conclusion on why his obviously fake physics paper was accepted by the cultural-studies journal Social Text: “it flattered the editors’ ideological preconceptions”. It saddens me to think that there are people out there who actively want to believe that women are discriminated against, even in areas where the data says they are not. Folks, let’s not invent bad news.

Would this study have been published in its present form?

This is the big question. As noted, I am a big fan of preprints. But I think that the misleading reporting in the gender-bias paper would not make it through peer-review — as the many critical comments on the preprint certainly suggest. Had this paper taken a conventional route to publication, with pre-publication review, then I doubt we would now be seeing the present sequence of misleading headlines in respected venues, and the flood of gleeful “see-I-told-you” tweets.

(And what do those headlines and tweets achieve? One thing I am quite sure they will not do is encourage more women to start coding and contributing to open-source projects. Quite the opposite: any women taking these headlines at face value will surely be discouraged.)

So in this case, I think the fact that the study in its present form appeared on such an official-looking venue as PeerJ Preprints has contributed to the avalanche of unfortunate reporting. I don’t quite know what to do with that observation.

What’s for sure is that no one comes out of this a winner: not GitHub, whose reputation has been unfairly maligned; not the authors, whose reporting has been shown to be misleading; not the media outlets who have leapt uncritically on a sensational story; not the tweeters who have spread alarm and despondency; not PeerJ Preprints, which has unwittingly lent a veneer of authority to this car-crash. And most of all, not the women who will now be discouraged from contributing to open-source projects.


Thirteen years ago, Kenneth Adelman photographed part of the California coastline from the air. His images were published as part of a set of 12,000 in the California Coastal Records Project. One of those photos showed the Malibu home of the singer Barbra Streisand.

In one of the most ill-considered moves in history, Streisand sued Adelman for violation of privacy. As a direct result, the photo — which had at that point been downloaded four times — was downloaded a further 420,000 times from the CCRP web-site alone. Meanwhile, the photo was republished all over the Web and elsewhere, and has almost certainly now been seen by tens of millions of people.

Oh, look! There it is again!

Last year, the tiny special-interest academic-paper search-engine Sci-Hub was trundling along in the shadows, unnoticed by almost everyone.

In one of the most ill-considered moves in history, Elsevier sued Sci-Hub for lost revenue. As a direct result, Sci-Hub is now getting publicity in venues like the International Business Times, Russia Today, The Atlantic, Science Alert and more. It’s hard to imagine any other way Sci-Hub could have reached this many people this quickly.


I’m not discussing at the moment whether what Sci-Hub is doing is right or wrong. What’s certainly true is (A) it’s doing it, and (B) many, many people now know about it.

It’s going to be hard for Elsevier to get this genie back into the bottle. They’ve already shut down the original sci-hub.com domain, only to find it immediately popping up again as sci-hub.io. That’s going to be a much harder domain for them to shut down, and even if they manage it, the Sci-Hub operators will not find it difficult to get another one. (They may already have several more lined up and ready to deploy, for all I know.)

So you’d think the last thing they’d want to do is tell the world all about it.