September 14, 2016
Long-time SV-POW! readers will remember that three years ago, full of enthusiasm after speaking about Barosaurus at the Edinburgh SVPCA, Matt and I got that talk written up in double-quick time and had it published as a PeerJ Preprint in less than three weeks. Very quickly, the preprint attracted substantive, helpful reviews: three within the first 24 hours, and several more in the next few days.
This was great: it gave us the opportunity to handle those review comments and turn the manuscript around into an already-reviewed formal journal submission less than a month after the original talk.
So of course what we did instead was: nothing. For three years.
I can’t excuse that. I can’t even explain it. It’s not as though we’ve spent those three years churning out a torrent of other awesome papers. We’ve both just been … a bit lame.
Anyway, here’s a story that will be hauntingly familiar. A month ago, full of enthusiasm after speaking about Barosaurus at the Liverpool SVPCA, Matt and I found ourselves keen to write up that talk in double-quick time. It’s an exciting tale of new specimens, reinterpretation of an important old specimen, and a neck eight times as long as that of a world-record giraffe.
But it would be crazy to write the new Barosaurus paper without first having dealt with the old Barosaurus paper. So now, finally, three years on, we’ve done that. Version 2 of the preprint is now available (Taylor and Wedel 2016), incorporating all the fine suggestions of the people who reviewed the first version — and with a slightly spiffed-up title. What’s more, the new version has also been submitted for formal peer-review. (In retrospect, I can’t think why we didn’t do that when we put the first preprint up.)
A big part of the purpose of this post is to thank Emanuel Tschopp, Mark Robinson, Andy Farke, John Foster and Mickey Mortimer for their reviews back in 2013. I know it’s overdue, but they are at least all acknowledged in the new version of the manuscript.
Now we cross our fingers, and hope that the formally solicited reviews for the new version of the manuscript are as helpful and constructive as the reviews in that first round. Once those reviews are in, we should be able to move quickly and painlessly to a formally published version of this paper. (I know, I know — I shouldn’t offer such a hostage to fortune.)
Meanwhile, I will finally be working on handling the reviews of this other PeerJ submission, which I received back in October last year. Yes, I have been lax; but I am back in the saddle now.
- Taylor, Michael P., and Mathew J. Wedel. 2016. The neck of Barosaurus: longer, wider and weirder than those of Diplodocus and other diplodocines. PeerJ PrePrints 1:e67v2 doi:10.7287/peerj.preprints.67v2
That paper that says women are better coders than men but are judged on their gender? It doesn’t say that at all
February 20, 2016
As a long-standing proponent of preprints, it bothers me that of all PeerJ’s preprints, by far the one that has attracted the most attention is Terrell et al. (2016)’s Gender bias in open source: Pull request acceptance of women versus men. Thanks in part to a misleading abstract, it has generated headlines like these:
- Study: Female Coders Better Than Men, But Perceived As Worse (LiveScience)
- Women accepted as better coders as long as no gender link (TechXplore)
- Women devs – want your pull requests accepted? Just don’t tell anyone you’re a girl (The Register)
But in fact, as Kate Jeffrey points out in a comment on the preprint (emphasis added):
The study is nice but the data presentation, interpretation and discussion are very misleading. The introduction primes a clear expectation that women will be discriminated against while the data of course show the opposite. After a very large amount of data trawling, guided by a clear bias, you found a very small effect when the subjects were divided in two (insiders vs outsiders) and then in two again (gendered vs non-gendered). These manipulations (which some might call “p-hacking”) were not statistically compensated for. Furthermore, you present the fall in acceptance for women who are identified by gender, but don’t note that men who were identified also had a lower acceptance rate. In fact, the difference between men and women, which you have visually amplified by starting your y-axis at 60% (an egregious practice) is minuscule. The prominence given to this non-effect in the abstract, and the way this imposes an interpretation on the “gender bias” in your title, is therefore unwarranted.
And as James Best points out in another comment:
Your most statistically significant results seem to be that […] reporting gender has a large negative effect on acceptance for all outsiders, male and female. These two main results should be in the abstract. In your abstract you really should not be making strong claims about this paper showing bias against women because it doesn’t. For the inside group it looks like the bias moderately favours women. For the outside group the biggest effect is the drop for both genders. You should hence be stating that it is difficult to understand the implications for bias in the outside group because it appears the main bias is against people with any gender vs people who are gender neutral.
Here is the key graph from the paper:
(The legends within the figure are tiny: on the Y-axes, they both read “acceptance rate”; and along the X-axis, from left to right, they read “Gender-Neutral”, “Gendered” and then again “Gender-Neutral”, “Gendered”.)
So James Best’s analysis is correct: the real finding of the study is a truly bizarre one, that disclosing your gender, whatever that gender is, reduces the chance of code being accepted. For “insiders” (members of the project team), the effect is slightly stronger for men; for “outsiders” it is rather stronger for women. (Note by the way that all the differences are much less than they appear, because the Y-axis runs from 60% to 90%, not 0% to 100%.)
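To see just how much a truncated axis inflates a difference, here is a minimal sketch. The 60%–90% range is taken from the figure described above; the function name and the example values are my own, purely illustrative and not data from the paper.

```python
# Illustrative sketch: how much a truncated y-axis visually
# amplifies a gap, compared with plotting on the full 0-100% axis.

def visual_exaggeration(y_min: float, y_max: float,
                        full_min: float = 0.0, full_max: float = 100.0) -> float:
    """Ratio by which a truncated axis stretches the same difference
    relative to the full axis: (full range) / (truncated range)."""
    return (full_max - full_min) / (y_max - y_min)

# An axis running from 60% to 90% instead of 0% to 100%:
factor = visual_exaggeration(60, 90)
print(f"{factor:.2f}x")  # every percentage-point gap is drawn ~3.33x taller
```

In other words, on a 60–90 axis a few-percentage-point gap between the “Gendered” and “Gender-Neutral” bars occupies more than three times the vertical space it would on a full axis, which is exactly why the effect looks bigger than it is.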
Why didn’t the authors report this truly fascinating finding in their abstract? It’s difficult to know, but it’s hard not to at least wonder whether they felt that the story they told would get more attention than their actual findings — a feeling that has certainly been confirmed by sensationalist stories like Sexism is rampant among programmers on GitHub, researchers find (Yahoo Finance).
I can’t help but think of Alan Sokal’s conclusion on why his obviously fake paper in the physics of gender studies was accepted by Social Text: “it flattered the editors’ ideological preconceptions”. It saddens me to think that there are people out there who actively want to believe that women are discriminated against, even in areas where the data says they are not. Folks, let’s not invent bad news.
Would this study have been published in its present form?
This is the big question. As noted, I am a big fan of preprints. But I think that the misleading reporting in the gender-bias paper would not make it through peer-review — as the many critical comments on the preprint certainly suggest. Had this paper taken a conventional route to publication, with pre-publication review, then I doubt we would now be seeing the present sequence of misleading headlines in respected venues, and the flood of gleeful “see-I-told-you” tweets.
(And what do those headlines and tweets achieve? One thing I am quite sure they will not do is encourage more women to start coding and contributing to open-source projects. Quite the opposite: any women taking these headlines at face value will surely be discouraged.)
So in this case, I think the fact that the study in its present form appeared in such an official-looking venue as PeerJ Preprints has contributed to the avalanche of unfortunate reporting. I don’t quite know what to do with that observation.
What’s for sure is that no-one comes out of this as winners: not GitHub, whose reputation has been unjustly maligned; not the authors, whose reporting has been shown to be misleading; not the media outlets who have leapt uncritically on a sensational story; not the tweeters who have spread alarm and despondency; not PeerJ Preprints, which has unwittingly lent a veneer of authority to this car-crash. And most of all, not the women who will now be discouraged from contributing to open-source projects.
November 19, 2015
I got back on Tuesday from OpenCon 2015 — the most astonishing conference on open scholarship. Logistically, it works very differently from most conferences: students have their expenses paid, but established scholars have to pay a registration fee and cover their own expenses. That inversion of how things are usually done captures much of what’s unique about OpenCon: its focus on the next generation is laser-sharp.
They say you should never meet your heroes, but OpenCon demonstrated that that’s not always a good rule. Here I am with Erin McKiernan — the epitome of a fully open early-career researcher — and Mike Eisen, who needs no introduction:
(This photo was supposed to be Erin and me posing in our PeerJ T-shirts, but Mike crashed it with his PLOS shirt. Thanks to Geoff Bilder for taking the photo.)
It was striking that the opening session, on Saturday morning, consisted of consecutive keynotes from Mike and then Erin. Both are now free to watch, and I can’t overstate how highly I recommend them. Seriously, make time. Next time you’re going to watch a movie, skip it and watch Mike and Erin instead.
Much of Mike’s talk was history: how he and others first became convinced of the importance of openness, how E-biomed nearly happened and then didn’t, how PLOS started with a declaration and became a publisher, and so on. What’s striking about this is just how much brutal opposition and painful discouragement Mike and his colleagues had to go through to get us to where we are now. The E-biomed proposal that would have freed all biomedical papers was opposed powerfully by publishers (big surprise, huh?) and eventually watered down into PubMed Central. The PLOS declaration collected 34,000 signatures, but most signatories didn’t follow through. PLOS as a publisher was met with scepticism; and PLOS ONE with derision. It takes a certain strength of mind and spirit to keep on truckin’ through that kind of setback, and we can all be grateful that Mike’s was one of the hands on the wheel.
Erin, though at a much earlier stage in her career, has made a commitment to extreme openness that mirrors Mike’s. It’s good to see that so far, it’s helping rather than harming her career.
(And how is it going? Watch her talk, which follows Mike’s, to find out. You won’t regret it.)
There is so, so much more that I could say about OpenCon. Listing all the inspiring people that I met, alone, would be too much for one blog-post. I will just briefly mention some of those that I have known by email/blog/Twitter for some time, but met in the flesh for the first time: Mike Eisen and Erin McKiernan both fall into that category; so do Björn Brembs, Melissa Hagemann, Geoff Bilder and Danny Kingsley. I could have had an amazing time just talking to people even if I’d missed all the sessions. (Apologies to everyone I’ve not mentioned.)
Oh, and how often do you get to rub shoulders with Jimmy Wales?
(That’s Jon Tennant in between Jimmy and me, and Mike Eisen trying, but not quite succeeding, to photobomb us from behind.)
And yet, even with global superstars around, the part of the weekend that impressed me the most was a small breakout session where I found myself in a room with a dozen people I’d never met before, didn’t recognise, and hadn’t heard of. As we went around the room and did introductions, every single one of them was doing something awesome. They were helping a scholarly society to switch to OA publishing, or funding open projects in the developing world, or driving a university’s adoption of an OA policy, or creating a new repository for unpublished papers, or something. (I really wish I’d written them all down.)
The sheer amount of innovation and hard work that’s going on just blew me away. So: OpenCon 2015 community, I salute you! May we meet again!
Update (Saturday 21 November 2015)
Here is the conference photo, taken by Slobodan Radicev, CC BY:
And here’s a close-up of the bit with me, honoured to be sandwiched between the founders of Public Library of Science and the Open Library of Humanities! (That’s Mike Eisen to the left, and Martin Eve to the right.)
October 6, 2015
I have a new preprint up at PeerJ (Taylor 2015), and have also submitted it simultaneously for peer review. In a sense, it’s not a paper I am happy about, as its title explains: “Almost all known sauropod necks are incomplete and distorted”.
This paper has been a while coming, and much of the content will be familiar to long-time readers, as quite a bit of it is derived from three SV-POW! posts: How long was the neck of Diplodocus? (2011), Measuring the elongation of vertebrae (2013) and The Field Museum’s photo-archives tumblr, featuring: airbrushing dorsals (2014). It also uses the first half of my 2011 SVPCA talk, Sauropod necks: how much do we really know? (and the second half became the seed that grew into our 2013 neck-cartilage paper.)
So in one sense, publishing this is a bit of a mopping up exercise. But it’s also more than that, because I think it’s important to get all these observations (and the relevant literature review) down all in one place, to help us recognise just how serious the problem is. There are, to a first approximation, no complete sauropod necks in the published literature. And the vertebrae of the necks we do have are crushed to the point where trying to articulate them is close to meaningless.
I’m not happy about this. But I think it’s important to face the reality and be honest with ourselves about how much we can really know about sauropod necks. There’s a lot we can do in a qualitative way, but most quantitative results are going to be swamped in supposition and error.
October 4, 2015
Preprints are in the air! A few weeks ago, Stephen Curry had a piece about them in the Guardian (Peer review, preprints and the speed of science) and pterosaur palaeontologist Liz Martin published Preprints in science on her blog Musings of Clumsy Palaeontologist. The latter in particular has spawned a prolific and fascinating comment stream. Then SV-POW!’s favourite journal, PeerJ, weighed in on its own blog with A PeerJ PrePrint – so just what is that exactly?.
Following on from that, I was invited to contribute a guest-post to the PeerJ blog: they’re asking several people about their experiences with PeerJ Preprints, and publishing the results in a series. I started to write my answers in an email, but they soon got long enough that I concluded it made more sense to write my own post instead. This is that post.
As a matter of fact, I’ve submitted four PeerJ preprints, and all of them for quite different reasons.
1. Barosaurus neck. Matt and I submitted the Barosaurus manuscript as a preprint because we wanted to get feedback as quickly as possible. We certainly got it: four very long, detailed comments that were more helpful than most formally solicited peer-reviews that I’ve had. (It’s to our discredit that we didn’t then turn the manuscript around immediately, taking those reviews into account. We do still plan to do this, but other things happened.)
2. Dinosaur diversity. Back in 2004 I submitted my first ever scientific paper, a survey of dinosaur diversity broken down in various ways. It was rejected (for what I thought were spurious reasons, but let it pass). The more time that passed, the more out of date the statistics became. As my interests progressed in other directions, I reached the point of realising that I was never going to get around to bringing that paper up to date and resubmitting it to a journal. Rather than let it be lost to the world, when I think it still contains much that is of interest, I published it as a pre-print (although it’s not pre- anything: what’s posted is the final version).
3. Cartilage angles. Matt and I had a paper published in PLOS ONE in 2013, on the effect that intervertebral cartilage had on sauropod neck posture. Only after it was published did I realise that there was a very simple way to quantify the geometric effect. I wrote what was intended to be a one-pager on that, planning to issue it as a sort of erratum. It ended up much longer than expected, but because I considered it to be material that should really have been in the original PLOS ONE paper, I wanted to get it out as soon as possible. So as soon as the manuscript was ready, I submitted it simultaneously as a preprint and onto the peer-review track at PeerJ. (It was published seven weeks later.)
4. Apatosaurine necks. Finally, I gave a talk at this year’s SVPCA (Symposium on Vertebrate Palaeontology and Comparative Anatomy), based on an in-progress manuscript in which I am second author to Matt. The proceedings of the symposium are emerging as a PeerJ Collection, and the other authors and I wanted our paper to be a part of that collection. So I submitted the abstract of the talk I gave, with the slide-deck as supplementary information. In time, this version of the preprint will be superseded by the completed manuscript, and eventually (we hope) by the peer-reviewed paper.
So the thing to take away from this is that there are lots of reasons to publish preprints. They open up different ways of thinking about the publication process.
September 10, 2015
Wouldn’t it be great if, after a meeting like the 2015 SVPCA, there was a published set of proceedings? A special issue of a journal, perhaps, that collected papers that emerge from the work presented there.
Of course the problem with special issues, and edited volumes in general, is that they take forever to come out. After the Dinosaurs: A Historical Perspective conference on 6 May 2008, I got my talk on the history of sauropod research written up and submitted on 7 August, just over three months later. It took another five and a half months to make it through peer-review to acceptance. And then … nothing. It sat in limbo for a year and nine months before it was finally published, because of course the book couldn’t be finalised until the slowest of the 50 or so authors, editors and reviewers had done their jobs.
There has to be a better way, doesn’t there?
Rhetorical question, there. There is a better way, and unsurprisingly to regular readers, it’s PeerJ that has pioneered it. In PeerJ Collections, papers can be added at any time, and each one is published as it’s ready. Better still, the whole lifecycle of the paper can (if the authors wish) be visible from the collection. You can start by posting the talk abstract, then replace it with a preprint of the complete manuscript when it’s ready, and finally replace that with the published version of the paper once it’s been through peer-review.
Take a look, for example, at the collection for the 3rd International Whale Shark Conference (held, by the way, at the Georgia Aquarium in Atlanta, which has awesome whale sharks on view).
As you can see from the collection (at the time of writing), only one of the constituent papers — Laser photogrammetry improves size and demographic estimates for whale sharks — has actually been published so far. But a dozen other papers exist in preprint form. That means that the people who attended the conference, saw the talks and want to refer to them in their work have something to cite.
The hot news is that Mark Young and the other SVPCA 2015 organisers have arranged for PeerJ to set up an SPPC/SVPCA 2015 Collection. I think this is just marvellous — the best possible way to make a permanent record of an important event.
The collection is very new: at the time of writing, it hosts only five abstracts (one of them ours). We’re looking forward to seeing others added. Some of the abstracts (including ours) have the slides of the talk attached as supplementary information.
Although I’m lead author on the talk (because I prepared the slides and delivered the presentation), this project is really Matt’s baby. There is a Wedel et al. manuscript in prep already, so we hope that within a month or two we’ll be able to replace the abstract with a complete manuscript. Then of course we’ll put it through peer-review.
I hope plenty of other SVPCA 2015 speakers will do the same. Even those who, for whatever reason, don’t want to publish their work in PeerJ, can use the collection as a home for their abstracts and preprints, then go off and submit the final manuscript elsewhere.
July 16, 2015
I have watched several people go through this sequence.
- DENIAL. PeerJ? What even is this thing? I’ll send my work to a real journal, thanks.
- THAWING. Huh, so-and-so published in PeerJ, it must not be that bad.
- GRUDGING SUBMISSION. Oh, okay, I’ll send them this one thing. I still have reservations but I want this out quickly. And I’m tired of getting rejected because some asshat thinks my paper isn’t sexy enough.
- AWAKENING. Wow, that was a lot faster, easier, and less painful than I expected. And the result is awesome.
- ACCEPTANCE. Why would I send my work anywhere else? No, really, I’m trying to think of a reason.