Christine Argot of the MNHN, Paris, drew our attention to this wonderful old photo (from here, original caption reproduced below):

© Paleontological Museum, Moscow
In the beginning of XX century, the Severo-Dvinskaya gallery (named after prof. Amalitsky) became the gold basis of the exhibition hall of ancient life in the Geological Museum of St-Petersburg. The museum hall was completed with a cast of the Diplodicus carnegii skeleton presented by E.Carnegy fund in 1913, at the 300-th anniversary of the Romanovs dynasty.

I found a different version of what seems to be the same photo (greyscaled, lower resolution, but showing more of the surrounding area) here:

[1932 photograph]

What we have here is a truly bizarre mount of Diplodocus — almost certainly one of the casts of the D. carnegii holotype CM 84 — with perfectly erect, parasagittal hindlimbs but strangely everted elbows.

There are a few mysteries here.

First, where and when was this photo taken? Christine’s email described this as a “picture of a Diplodocus cast taken in St. Petersburg around 1920”, and the caption above seems to confirm that location; but then why is it copyright the Paleontological Museum, Moscow? And since the web-site the photo comes from belongs to a Swedish museum, no explanation is forthcoming there.

The second photo is from the web-site of the Borisyak Paleontological Institute in Moscow, but that site unfortunately provides no caption. Its juxtaposition with two more modern Diplodocus-skeleton photos from the institute’s own gallery perhaps suggests that the modern mount shown in the more recent photographs is a re-pose of the old mount in the black-and-white photo. If so, that might mean that the skeleton was actually in Moscow all along rather than St. Petersburg, or perhaps that it was moved from St. Petersburg to Moscow and remounted there.

Does anyone know? Has anyone out there visited the St. Petersburg museum recently and seen whether there is still a Diplodocus skeleton there? If so, is it still mounted in this bizarre way? Better yet, do you have photos?

Tornier’s sprawling, disarticulated reconstruction of Diplodocus, modified from Tornier (1909, plate II).

The second question, of course, is why this posture was used. The pose makes no sense for several reasons — one of which is that even if Diplodocus could attain this posture, it would only serve to leave the forefeet under the torso, in the same position as erect forelimbs would have them. The pose only makes any kind of sense if you imagine the animal lowering its torso to drink; but given that it had a flexible six-meter-long neck, that hardly seems necessary.

Of course Diplodocus does have a history of odd postures: because of the completeness of the D. carnegii holotype, it became the subject of the Sauropod Posture Wars between Tornier, Hay and Holland in the early 20th century. Both Tornier (1909) and Hay (1910) favoured a sprawling posture like that of lizards (see images above and below), and were soundly refuted by Holland (1910).

The form and attitudes of Diplodocus. Hay (1910: plate 1)

But the Tornier and Hay postures bear no relation to that of the mounted skeleton in the photographs above: they position the forefeet far lateral to the torso, and affect the hindlimbs as well as the forelimbs. So whatever the Russian mount was doing, I don’t think it can have been intended as a representation of the Tornier/Hay hypothesis.

But it gets even weirder. Christine tells me that “I’m aware of […] the tests that Holland performed on the Russian cast to get rid of the hypothesis suggesting a potential lizard-like posture. So I think that he would have never allowed such a posture for one of the casts he mounted himself.” Now I didn’t know that Holland had executed the mounting of this cast. Assuming that’s right, it makes it even more inexplicable that he would have allowed such a posture.

Or did he?

Christine’s email finishes by asking: “What do you think? do you think that somebody could have come behind Holland to change the position? do you know any colleague or publication who could mention this peculiar cast and comment its posture?”

Can anyone help?

References

  • Hay, Oliver P. 1910. On the manner of locomotion of the dinosaurs, especially Diplodocus, with remarks on the origin of birds. Proceedings of the Washington Academy of Sciences 12(1):1-25.
  • Holland, W. J. 1910. A review of some recent criticisms of the restorations of sauropod dinosaurs existing in the museums of the United States, with special reference to that of Diplodocus carnegiei in the Carnegie Museum. American Naturalist 44:259-283.
  • Nieuwland, Ilja. 2010. The colossal stranger. Andrew Carnegie and Diplodocus intrude European Culture, 1904–1912. Endeavour 34(2):61-68.
  • Tornier, Gustav. 1909. Wie war der Diplodocus carnegii wirklich gebaut? Sitzungsbericht der Gesellschaft naturforschender Freunde zu Berlin 4:193-209.

As we all know, University libraries have to pay expensive subscription fees to scholarly publishers such as Elsevier, Springer, Wiley and Informa, so that their researchers can read articles written by their colleagues and donated to those publishers. Controversially (and maybe illegally), when negotiating contracts with libraries, publishers often insist on confidentiality clauses — so that librarians are not allowed to disclose how much they are paying. The result is an opaque market with no downward pressure on prices, hence the current outrageously high prices, which are rising much more quickly than inflation even as publishers’ costs shrink due to the transition to electronic publishing.

On Thursday 11 April 2013, Oxford University hosted a conference called Rigour and Openness in 21st Century Science. The evening event was a debate on the subject Evolution or Revolution In Science Communication. During this debate, Stephen Curry of Imperial College noted that his librarian isn’t allowed to tell him how much they pay for Elsevier journals. This is the response of David Tempest, Elsevier’s Deputy Director of Universal Sustainable Research Access.

Here’s a transcript:

Curry [in reference to the previous answer]: I’m glad David Tempest is so interested in librarians being able to make costs transparent to their users, because at my university, Imperial College, my chief librarian cannot tell me how much she pays for Elsevier journals because she’s bound by a confidentiality clause. Would you like to address that?

[Loud applause for the question]

Tempest: Well, indeed there are confidentiality clauses inherent in the system, in our Freedom Collections. The Freedom Collections do give a lot of choice and there is a lot of discount in there to the librarians. And the use, and the cost per use has been dropping dramatically, year on year. And so we have to ensure that, in order to have fair competition between different countries, that we have this level of confidentiality to make that work. Otherwise everybody would drive down, drive down, drive drive drive, and that would mean that …

[The last part is drowned in the laughter of the audience.]

So there you have it: confidentiality clauses exist because otherwise everybody would drive down prices. And we can’t have that, can we?

(Is this extracted segment of video unfairly misrepresenting Tempest? No. To see that for yourself, I highly recommend that you watch the video of the whole debate. It’s long — nearly two hours — but well worth the time. The section I used here starts at 1:09:50.)

It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.

As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
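To make that concrete, here is a small simulation of my own. The log-normal distribution is just an assumed stand-in for real citation data, but it shows how a few big winners drag the mean — which is essentially what the IF reports — far away from what a typical paper gets:

```python
# Illustrative simulation, NOT real citation data: an assumed log-normal
# distribution stands in for the heavy right skew of citation counts.
import numpy as np

rng = np.random.default_rng(42)
citations = rng.lognormal(mean=1.0, sigma=1.5, size=10_000)

mean = citations.mean()          # what an IF-like average reflects
median = np.median(citations)    # what a typical paper actually gets
top_decile_share = np.sort(citations)[-1_000:].sum() / citations.sum()

print(f"mean:   {mean:.1f}")
print(f"median: {median:.1f}")
print(f"share of all citations taken by the top 10% of papers: {top_decile_share:.0%}")
```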

To bring this home, I plotted my own personal impact-factor/citation-count graph. I used citation counts from Google Scholar, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):

[Graph from my Berlin 11 satellite talk: citation count plotted against venue impact factor, with best-fit line]

I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
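If you fancy drawing the equivalent graph for your own publications, a minimal sketch like the following will do it. The impact factors and citation counts here are hypothetical placeholders, not my actual numbers:

```python
# Hypothetical per-paper data: venue impact factor vs. Google Scholar citations.
import numpy as np
import matplotlib.pyplot as plt

impact_factor = np.array([0.0, 0.0, 0.93, 1.58, 1.58, 2.21, 2.21, 36.28])
citations = np.array([14, 6, 10, 30, 12, 22, 7, 1])

# Least-squares best-fit line through the points.
slope, intercept = np.polyfit(impact_factor, citations, 1)

plt.scatter(impact_factor, citations)
xs = np.linspace(0.0, impact_factor.max(), 100)
plt.plot(xs, slope * xs + intercept)
plt.xlabel("Impact factor of venue")
plt.ylabel("Citation count (Google Scholar)")
plt.title(f"Slope of best-fit line: {slope:.2f}")
plt.show()
```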

There are a few things worth unpacking on that graph.

First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.

My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community, but one with a modest impact factor of 1.58.

My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Paleontology — unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)

In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing, given how dreadfully slowly the project has progressed.)

Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)

What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.

References

  • Farke, Andrew A., Michael P. Taylor and Mathew J. Wedel. 2009. Sharing: public databases combat mistrust and secrecy. Nature 461:1053.
  • Lozano, George A., Vincent Larivière and Yves Gingras. 2012. The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology 63(11):2140-2145.
  • Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
  • Taylor, Michael P., Mathew J. Wedel and Darren Naish. 2009. Head and neck posture in sauropod dinosaurs inferred from extant animals. Acta Palaeontologica Polonica 54(2):213-220.

In what is by now a much-reported story, @DNLee, who writes the Urban Scientist blog on the Scientific American blog network, was invited by Biology Online to write a guest-post for their blog. On being told this was a non-paying gig, she politely declined: “Thank you very much for your reply. But I will have to decline your offer. Have a great day.” To which Biology Online’s blog editor Ofek replied “Are you an urban scientist or an urban whore?”

So far, so horrible. I had never heard of Biology Online before this, and won’t be seeking them out. You can add my name to the long list of people who certainly won’t be writing free content for them.

It’s what happened next that bothers me.

DNLee posted on her blog about what happened — rather a restrained post, which took the opportunity to discuss the wider implications rather than cursing out the perpetrator.

And Scientific American deleted the post.

They just deleted it.

This bothers me much more than the original incident, because I had no idea who Biology Online are, but thought I knew what Scientific American was. Looks like I didn’t. All I know for sure about them now is that they’re a company that accepts advertising revenue from Biology Online. Just saying.

Not a word was said to DNLee about this censorship by the people running the network. The post just vanished, bam. If you follow the link, it currently says “You have reached this page due to an error”. Yes. An error on the part of the blog-network management.

(This, by the way, is one of the reasons I don’t expect Sauropod Vertebra Picture of the Week ever to join one of these networks. I will not tolerate someone else making a decision to take down one of my posts.)

What makes this much worse is that Scientific American’s Editor in Chief Mariette DiChristina has flat-out lied about this incident at least once. First she tweeted “@sciam is a publication for discovering science. The post was not appropriate for this area & was therefore removed.” Then after a day of silence, she blogged “we could not quickly verify the facts of the blog post and consequently for legal reasons we had to remove the post”.

So which was it, SciAm? Did you censor the post because it was off-topic? Or because of a perceived legal threat? Or, since we know at least one of these mutually contradictory claims isn’t true, maybe neither of them is, and you removed it to avoid inconveniencing a sponsor?

DiChristina’s blog-post is a classic non-apology. It says nothing about the original slur that gave rise to all this, and it doesn’t apologise to DNLee for censoring her perfectly reasonable blog-post. What it does do is blame the victim by implying that DNLee’s post is somehow illegal. (You can judge for yourself whether it is by reading one of the many mirrors.)

Then there’s this: “for legal reasons we had to remove the post”. What legal reasons? When did the SciAm legal team get involved in this? (Did they at all? I am sceptical.) Have you actually been threatened by Biology Online? (Again, I have my doubts.) Even if a threat has been received, it’s at best cowardly of SciAm to cave so immediately, and grotesquely unprofessional not even to bother notifying DNLee.

So SciAm are digging themselves deeper and deeper into this hole. Even their usually prolific and reliable blog editor @BoraZ has gone uncharacteristically quiet — I can only hope because he, too, is being silenced, rather than because he’s complicit.

There are only two ways for the SciAm blogging network to get out of this with some shreds of their reputation intact. They need to either show clearly that DNLee was lying about Biology Online, in which case they would merely have mismanaged this incident; or they need to reinstate her post and apologise properly. “Properly” means “We screwed up because of our cowardice, please forgive us”, not “We’re sorry if some people were offended by our decision to do this thing that we’re going to keep claiming was OK”. Because it wasn’t.

Right then, SciAm. Where now?

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now that you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording whether given journals were listed in the DOAJ or on Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.


What is an ad-hominem attack?


I recently handled the revisions on a paper that hopefully will be in press very soon. One of the review comments was “Be very careful not to make ad hominem attacks”.

I was a bit surprised to see that — I wasn’t aware that I’d made any — so I went back over the manuscript, and sure enough, there were no ad homs in there.

There was criticism, though, and I think that’s what the reviewer meant.

Folks, “ad hominem” has a specific meaning. An “ad hominem attack” doesn’t just mean criticising something strongly; it means criticising the author rather than the work. The phrase is Latin for “to the man”. Here’s a pair of examples:

  • “This paper by Wedel is terrible, because the data don’t support the conclusion” — not ad hominem.
  • “Wedel is a terrible scientist, so this paper can’t be trusted” — ad hominem.

What’s wrong with ad hominem criticism? Simply, it’s irrelevant to evaluation of the paper being reviewed. It doesn’t matter (to me as a scientist) whether Wedel strangles small defenceless animals for pleasure in his spare time; what matters is the quality of his work.

Note that ad hominems can also be positive — and they are just as useless there. Here’s another pair of examples:

  • “I recommend publication of Naish’s paper because his work is explained carefully and in detail” — not ad hominem.
  • “I recommend publication of Naish’s paper because he is a careful and detailed worker” — ad hominem.

It makes no difference whether Naish is a careful and detailed worker, or if he always buys his wife flowers on their anniversary, or even if he has a track-record of careful and detailed work. What matters is whether this paper, the one I’m reviewing, is good. That’s all.

As it happens the very first peer-review I ever received — for the paper that eventually became Taylor and Naish (2005) on diplodocoid phylogenetic nomenclature — contained a classic ad hominem, which I’ll go ahead and quote:

It seems to me perfectly reasonable to expect revisers of a major clade to have some prior experience/expertise in the group or in phylogenetic taxonomy before presenting what is intended to be the definitive phylogenetic taxonomy of that group. I do not wish to demean the capabilities of either author – certainly Naish’s “Dinosaurs of the Isle of Wight” is a praiseworthy and useful publication in my opinion – but I question whether he and Taylor can meet their own desiderata of presenting a revised nomenclature that balances elegance, consistency, and stability.

You see what’s happening here? The reviewer was not reviewing the paper, but the authors. There was no need for him or her to question whether we could meet our desiderata: he or she could just have read the manuscript and found out.

(Happy ending: that paper was rejected at the journal we first sent it to, but published at PaleoBios in revised form, and bizarrely is my equal third most-cited paper. I never saw that coming.)

Robin Osborne, professor of ancient history at King’s College, Cambridge, had an article in the Guardian yesterday entitled “Why open access makes no sense”. It was described by Peter Coles as “a spectacularly insular and arrogant argument”, by Peter Webster as an “Amazingly wrong-headed piece” and by Glyn Moody as “easily the most arrogant & dim-witted article I’ve ever read on OA”.

Here’s my response (posted as a comment on the original article):

At a time when the world as a whole is waking up to the open-access imperative, it breaks my heart to read this fusty, elitist, reactionary piece, in which Professor Osborne ends up arguing strongly for his own irrelevance. What a tragic lack of vision, and of ambition.

There is still a discussion to be had over what routes to take to universal open access, how quickly to move, and what other collateral changes need to be made (such as changing how research is evaluated for the purposes of job-searches and promotion). But Osborne’s entitled bleat is no part of that discussion. He has opted out.

The fundamental argument for providing open access to academic research is that research that is funded by the tax-payer should be available to the tax-payer.

That is not the fundamental argument for providing open access (although it’s certainly a compelling secondary one). The fundamental argument is that the job of a researcher is to create new knowledge and understanding; and that it’s insane to then take that new knowledge and understanding and lock it up where only a tiny proportion of the population can benefit from it. That’s true whether the research is funded publicly or by a private charity.

The problem is that the two situations are quite different. In the first case [academic research], I propose both the research questions and the dataset to which I apply them. In the second [commercial research] the company commissioning the work supplies the questions.

Osborne’s position here seems to be that because he is more privileged than a commercial researcher in one respect (being allowed to choose the subject of his research), he should also be more privileged in another (being allowed to choose to restrict his results to an elite). How can such an attitude be explained? I find it quite baffling. Why would allowing researchers to choose their own subjects mean that funders would be happy to allow the results to be hidden from the world?

Publishing research is a pedagogical exercise, a way of teaching others

Yes. Which is precisely why there is no justification for withholding it from those others.

At the end of the day the paper published in a Gold open access journal becomes less widely read. […] UK scholars who are obliged to publish in Gold open access journals will end up publishing in journals that are less international and, for all that access to them is cost-free, are less accessed in fact. UK research published through Gold open access will end up being ignored.

As a simple matter of statistics, this is flatly incorrect. Open-access papers are read, and cited, significantly more than paywalled papers. The meta-analysis of Swan (2010) surveyed 31 previous studies of the open-access citation advantage, showing that 27 of them found advantages of between 45% and 600%. I did a rough-and-ready calculation on the final table of that report, averaging the citation advantages given for each of ten academic fields (using the midpoints of ranges when given), and found that on average open-access articles are cited 176% more often — that is, 2.76 times as often — as non-open articles.
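For anyone who wants to check or repeat that arithmetic, the calculation is just midpoint-averaging, as in this sketch. The per-field ranges below are hypothetical placeholders (chosen so the output lands near the 176% figure quoted above), since I’m not reproducing Swan’s table here:

```python
# Midpoint-averaging sketch. The (low, high) citation-advantage ranges, in
# percent, are hypothetical placeholders -- use Swan (2010)'s final table
# for the real per-field values.
advantage_ranges = {
    "field A": (45, 200),
    "field B": (140, 260),
    "field C": (110, 300),
    # ... one entry per academic field, ten in all
}

midpoints = [(low + high) / 2 for low, high in advantage_ranges.values()]
average = sum(midpoints) / len(midpoints)

print(f"average citation advantage: {average:.0f}% more citations")
print(f"equivalently, OA articles are cited {1 + average / 100:.2f} times as often")
```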

There can be no such thing as free access to academic research. Academic research is not something to which free access is possible.

… because saying it twice makes it more true.

Like it or not, the primary beneficiary of research funding is the researcher, who has managed to deepen their understanding by working on a particular dataset.

Just supposing this strange assertion is true (which I don’t at all accept), I’m left wondering what Osborne thinks the actual purpose of his research is. On what basis does he think our taxes should pay him to investigate questions which (as he himself reminds us) he has chosen as being of interest to him? Does he honestly believe that the state owes him not just a living, but a living doing the work that he chooses on the subject that he chooses with no benefit accruing to anyone but him?

No, it won’t do. We fund research so that we can all be enriched by the new knowledge, not just an entitled elite. Open access is not just an economic necessity, it’s a moral imperative.