As a long-standing proponent of preprints, it bothers me that, of all PeerJ’s preprints, by far the one that has had the most attention is Terrell et al. (2016), Gender bias in open source: Pull request acceptance of women versus men. Helped along by a misleading abstract, it has been earning headlines like these:

But in fact, as Kate Jeffrey points out in a comment on the preprint (emphasis added):

The study is nice but the data presentation, interpretation and discussion are very misleading. The introduction primes a clear expectation that women will be discriminated against while the data of course show the opposite. After a very large amount of data trawling, guided by a clear bias, you found a very small effect when the subjects were divided in two (insiders vs outsiders) and then in two again (gendered vs non-gendered). These manipulations (which some might call “p-hacking”) were not statistically compensated for. Furthermore, you present the fall in acceptance for women who are identified by gender, but don’t note that men who were identified also had a lower acceptance rate. In fact, the difference between men and women, which you have visually amplified by starting your y-axis at 60% (an egregious practice) is minuscule. The prominence given to this non-effect in the abstract, and the way this imposes an interpretation on the “gender bias” in your title, is therefore unwarranted.

And James Best, in another comment, explains:

Your most statistically significant results seem to be that […] reporting gender has a large negative effect on acceptance for all outsiders, male and female. These two main results should be in the abstract. In your abstract you really should not be making strong claims about this paper showing bias against women because it doesn’t. For the inside group it looks like the bias moderately favours women. For the outside group the biggest effect is the drop for both genders. You should hence be stating that it is difficult to understand the implications for bias in the outside group because it appears the main bias is against people with any gender vs people who are gender neutral.

Here is the key graph from the paper:

[Figure 5 of Terrell et al. (2016). The legends within the figure are tiny: the Y-axes both read “acceptance rate”; along the X-axis, from left to right, the labels read “Gender-Neutral”, “Gendered”, and then again “Gender-Neutral”, “Gendered”.]

So James Best’s analysis is correct: the real finding of the study is a truly bizarre one, that disclosing your gender, whatever that gender is, reduces the chance of your code being accepted. For “insiders” (members of the project team), the effect is slightly stronger for men; for “outsiders” it is rather stronger for women. (Note, by the way, that all the differences are much smaller than they appear, because the Y-axis runs from 60% to 90%, not from 0% to 100%.)

Why didn’t the authors report this truly fascinating finding in their abstract? It’s difficult to know, but it’s hard not to at least wonder whether they felt that the story they told would get more attention than their actual findings — a feeling that has certainly been confirmed by sensationalist stories like Sexism is rampant among programmers on GitHub, researchers find (Yahoo Finance).

I can’t help but think of Alan Sokal’s conclusion on why his obviously fake paper in the physics of gender studies was accepted by Social Text: it “flattered the editors’ ideological preconceptions”. It saddens me to think that there are people out there who actively want to believe that women are discriminated against, even in areas where the data says they are not. Folks, let’s not invent bad news.

Would this study have been published in its present form?

This is the big question. As noted, I am a big fan of preprints. But I think that the misleading reporting in the gender-bias paper would not have made it through peer review — as the many critical comments on the preprint certainly suggest. Had this paper taken a conventional route to publication, with pre-publication review, then I doubt we would now be seeing the present sequence of misleading headlines in respected venues, and the flood of gleeful “see-I-told-you” tweets.

(And what do those headlines and tweets achieve? One thing I am quite sure they will not do is encourage more women to start coding and contributing to open-source projects. Quite the opposite: any women taking these headlines at face value will surely be discouraged.)

So in this case, I think the fact that the study in its present form appeared on such an official-looking venue as PeerJ Preprints has contributed to the avalanche of unfortunate reporting. I don’t quite know what to do with that observation.

What’s for sure is that no-one comes out of this a winner: not GitHub, whose reputation has been unfairly maligned; not the authors, whose reporting has been shown to be misleading; not the media outlets that have leapt uncritically on a sensational story; not the tweeters who have spread alarm and despondency; not PeerJ Preprints, which has unwittingly lent a veneer of authority to this car-crash. And most of all, not the women who will now be discouraged from contributing to open-source projects.

 

Thirteen years ago, Kenneth Adelman photographed part of the California coastline from the air. His images were published as part of a set of 12,000 in the California Coastal Records Project. One of those photos showed the Malibu home of the singer Barbra Streisand.

In one of the most ill-considered moves in history, Streisand sued Adelman for violation of privacy. As a direct result, the photo — which had at that point been downloaded four times — was downloaded a further 420,000 times from the CCRP web-site alone. Meanwhile, the photo was republished all over the Web and elsewhere, and has almost certainly now been seen by tens of millions of people.

[Photo: the Adelman photograph of Streisand’s house. Oh, look! There it is again!]

[Photo: the same photograph once more. Oh, look! There it is again!]

Last year, the tiny special-interest academic-paper search-engine Sci-Hub was trundling along in the shadows, unnoticed by almost everyone.

In one of the most ill-considered moves in history, Elsevier sued Sci-Hub for lost revenue. As a direct result, Sci-Hub is now getting publicity in venues like the International Business Times, Russia Today, The Atlantic, Science Alert and more. It’s hard to imagine any other way Sci-Hub could have reached this many people this quickly.

[Image: “5 Ways To Stop Sabotaging Your Success” article]

I’m not discussing at the moment whether what Sci-Hub is doing is right or wrong. What’s certainly true is (A) it’s doing it, and (B) many, many people now know about it.

It’s going to be hard for Elsevier to get this genie back into the bottle. They’ve already shut down the original sci-hub.com domain, only to find it immediately popping up again as sci-hub.io. That’s going to be a much harder domain for them to shut down, and even if they manage it, the Sci-Hub operators will not find it difficult to get another one. (They may already have several more lined up and ready to deploy, for all I know.)

So you’d think the last thing they’d want to do is tell the world all about it.

[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly to hopefully improve the flow, but I’ve tried not to tinker with the guts.]

This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem. Then in the next post, he introduced the idea of the LWM, or Less Wrong Metric, and the basic mathematical framework for calculating LWMs. Most recently, Mike talked about choosing parameters for the LWM, and drilled down to a fundamental question: (how) do we identify good research?

Let me say up front that I am fully convinced of the importance of the problem of evaluating researchers fairly. It is a question of direct and timely importance to me. I serve on the Promotion & Tenure committees of two colleges at Western University of Health Sciences, and I want to make good decisions that can be backed up with evidence. But anyone who has been in academia for long knows of people who have had their careers mangled by getting caught in institutional machinery that is not well suited to fairly evaluating scholarship. So I desperately want better metrics to catch on, to improve my own situation and those of researchers everywhere.

For all of those reasons and more, I admire the work that Mike has done in conceiving the LWM. But I’m pretty pessimistic about its future.

I think there is a widespread misapprehension that we got here because people and institutions were looking for good metrics, like the LWM, and we ended up with things like impact factors and citation counts because no-one had thought up anything better. That would imply a temporal sequence of:

1. Deliberately looking for metrics to evaluate researchers.
2. Finding some.
3. Trying to improve those metrics, or replace them with better ones.

I’m pretty sure this is exactly backwards: the metrics that we use to evaluate researchers are mostly simple – easy to explain, easy to count (the hanky-panky behind impact factors notwithstanding) – and therefore they spread like wildfire, and therefore they became used in evaluation. That implies a very different sequence:

1. A metric is invented, often for a reason completely unrelated to evaluating researchers (impact factors started out as a way for librarians to rank journals, not for administration to rank faculty!).
2. Because a metric is simple, it becomes widespread.
3. Because a metric is both simple and widespread, it makes it easy to compare people in wildly different circumstances (whether or not that comparison is valid or defensible!), so it rapidly evolves from being trivia about a researcher, to being a defining character of a researcher – at least when it comes to institutional evaluation.

If that’s true, then any metric aimed at wide-scale adoption needs to be as simple as possible. I can explain the h-index or the i10-index in one sentence. “Citation count” is self-explanatory. The fundamentals of the impact factor can be grasped in about 30 seconds, and even the complicated backstory can be conveyed in about 5 minutes.

In addition to being simple, the metric needs to work the same way across institutions and disciplines. I can compare my h-index with that of an endowed chair at Cambridge, a curator at a small regional museum, and a postdoc at Podunk State, and it Just Works without any tinkering or subjective decisions on the part of the user (other than What Counts – but that affects all metrics dealing with publications, so no one metric is better off than any other on that score).
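To put a number on “simple”: here is the whole of the h-index as a couple of lines of Python. This is my own throwaway illustration, not anyone’s official implementation, but it shows how little machinery the metric needs:

```python
def h_index(citations):
    # Largest h such that at least h papers have h or more citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(counts) if c >= i + 1)

print(h_index([10, 8, 5, 4, 3]))  # prints 4
```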

I fear that the LWM as conceived in Taylor (2016) is doomed, for the following reasons:

  • It’s too complex. It would probably be doomed if it had just a single term with a constant and an exponent (which I realize would defeat the purpose of having either a constant or an exponent), because that’s more math than either an impact factor or an h-index requires (perceptually, anyway – in the real world, most people’s eyes glaze over when the exponents come out; see the sketch after this list).
  • Worse, it requires loads of subjective decisions and assignments of importance on the part of the users.
  • And fatally, it would require a mountain of committee work to sort that out. I doubt if I could get the faculty in just one department to agree on a set of terms, constants, and exponents for the LWM, much less a college, much less a university, much less all of the universities, museums, government and private labs, and other places where research is done. And without the promise of universal applicability, there’s no incentive for any institution to put itself through the hell of work it would take to implement.
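For contrast with the h-index above, here is a sketch of what an LWM-style calculation might look like, on my reading of Taylor (2016): a sum of terms, each a parameter raised to an exponent and scaled by a constant. Every number below is invented purely for illustration, which is rather the point: someone has to pick them all.

```python
def lwm(values, constants, exponents):
    # LWM-style score: sum over terms of k_i * x_i ** e_i.
    # Every constant k_i and exponent e_i is a subjective choice
    # that some committee would have to agree on.
    return sum(k * x ** e for x, k, e in zip(values, constants, exponents))

# Hypothetical researcher: 40 papers, 900 citations, 120 kGBP grant income.
score = lwm(values=[40, 900, 120],
            constants=[1.0, 0.1, 0.5],
            exponents=[1.0, 0.5, 0.7])
print(score)  # one number, built from nine subjective choices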

Really, the only way I think the LWM could get into place is by fiat, by a government body. If the EPA comes up with a more complicated but also more accurate way to measure, say, airborne particle output from car exhausts, they can theoretically say to the auto industry, “Meet this standard or stop selling cars in the US” (I know there’s a lot more legislative and legal push and pull than that, but it’s at least possible). And such a standard might be adopted globally, either because it’s a good idea so it spreads, or because the US strong-arms other countries into following suit.

Even if I trusted the US Department of Education to fill in all of the blanks for an LWM, I don’t know that they’d have the same leverage to get it adopted. I doubt that the DofE has enough sway to get it adopted even across all of the educational institutions. Who would want that fight, for such a nebulous pay-off? And even if it could be successfully inflicted on educational institutions (which sounds negative, but that’s precisely how the institutions would see it), what about the numerous and in some cases well-funded research labs and museums that don’t fall under the DofE’s purview? And that’s just in the US. The culture of higher education and scholarship varies a lot among countries. Which may be why the one-size-fits-all solutions suck – I am starting to wonder if a metric needs to be broken, to be globally applicable.

The problem here is that the user base is so diverse that the only way metrics get adopted is voluntarily. So the challenge for any LWM is to be:

  1. Better than existing metrics – this is the easy part – and,
  2. Simple enough to be both easily grasped, and applied with minimal effort. In Malcolm Gladwell’s Tipping Point terms, it needs to be “sticky”. Although a better adjective for passage through the intestines of academia might be “smooth” – that is, having no rough edges, like exponents or overtly subjective decisions*, that would cause it to snag.

* Calculating an impact factor involves plenty of subjective decisions, but it has the advantages that (a) the users can pretend otherwise, because (b) ISI does the ‘work’ for them.

At least from my point of view, the LWM as Mike has conceived it is awesome and possibly unimprovable on the first point (in that practically any other metric could be seen as a degenerate case of the LWM), but dismal and possibly pessimal on the second one, in that it requires mounds of subjective decision-making to work at all. You can’t even get a default number and then iteratively improve it without investing heavily in advance.

An interesting thought experiment would be to approach the problem from the other side: invent as many new simple metrics as possible, and then see if any of them offer advantages over the existing ones. Although I have a feeling that people are already working on that, and have been for some time.

Simple, broken metrics like impact factor are the prions of scholarship. Yes, viruses are more versatile and cells more versatile still, by orders of magnitude, but compared to prions, cells take an awesome amount of effort to build and maintain. If you just want to infect someone and you don’t care how, prions are very hard to beat. And they’re so subtle in their machinations that we only became aware of them comparatively recently – much like the emerging problems with “classical” (i.e., non-alt) metrics.

I’d love to be wrong about all of this. I proposed the strongest criticism of the LWM I could think of, in hopes that someone would come along and tear it down. Please start swinging.

[Photos: an ostrich, a cormorant, and an alligator, all peeing]

Stand by . . . grumpy old man routine compiling . . . 

So, someone at Sony decided that an Angry Birds movie would be a good idea, about three years after the Angry Birds “having a moment” moment was over. There’s a trailer for it now, and at the end of the trailer, a bird pees for like 17 seconds (which is about 1/7 of my personal record, but whatever).

And now I see these Poindexters all over the internet pushing their glasses up their noses and typing, “But everyone knows that birds don’t pee! They make uric acid instead! That’s the white stuff in ‘bird poop’. Dur-hur-hur-hurrr!” I am reasonably sure these are the same people who harped on the “inaccuracy” of the peeing Postosuchus in Walking With Dinosaurs two decades ago. (Honestly, how I didn’t get this written and posted in our first year of blogging is quite beyond my capacity.)

Congratulations, IFLScientists, on knowing One Fact about nature. Tragically for you, nature knows countless facts, and among them is the fact that birds and crocodilians can pee. And since extant dinosaurs can and do pee, extinct ones probably could as well.

So, you know . . . try to show a little respect.

Now, it is true that crocs (mostly) and birds (always?) release more of their nitrogenous waste as uric acid than as urea. But their bodies produce both compounds. So does yours. We mammals are just shifted waaaay more heavily toward urea than uric acid, and extant archosaurs – and many (but not all) other reptiles to boot – are shifted waaaay more heavily toward uric acid than urea. Alligators also make a crapload of ammonia, but that’s a story for another time.

BUT, crucially, birds and crocs almost always release some clear, watery, urea-containing fluid when they dump the whitish uric acid, as shown in this helpful diagram that I stole from International Cockatiel Resource:

[Diagram: International Cockatiel Resource bird pee guide]

If you’ve never seen this, you’re just not getting to the bird poop fast enough – the urine is drying up before you notice it. Pick up the pace!

Sometimes birds and crocs save up a large quantity of fluid, and then flush everything out of their cloacas and lower intestines in one shot, as shown in the photos dribbled through this post. This has led to some erroneous reports that ostriches have urinary bladders. They don’t; they just back up lots of urine into their colons. Many birds recapture some water and minerals that way, and thereby concentrate their wastes and save water – basically using the colon as a sort of second-stage kidney (Skadhauge 1976).

[Photo: rhea peeing, by Markus Bühler]

Many thanks to Markus Bühler for permission to post his well-timed u-rhea photo.

[UPDATE the next day: To be perfectly clear, all that’s going on here is that the birds and crocs keep their cloacal sphincters closed. The kidneys keep on producing urine and uric acid, and with no way out (closed sphincter) and nowhere else to go (no bladder – although urinary bladders have evolved repeatedly in lizards), the pee backs up into the colon. So if you’re wondering if extinct dinosaurs needed some kind of special adaptation to be able to pee, the answer is no. Peeing is an inherent possibility, and in fact the default setting, for any reptile that can keep its cloaca shut.]

Aaaanyway, all those white urate solids tend to make bird pee more whitish than yellow, as shown in the photos. I have seen a photo of an ostrich making a good solid stream from cloaca to ground that was yellow, but that was years ago and frustratingly I haven’t been able to relocate it. Crocodilians seem to have no problem making a clear, yellowish pee-stream, as you can see in many hilarious YouTube videos of gators peeing on herpetologists and reporters, which I am putting at the bottom of this post so as not to break up the flow of the rant.

[Photo: ostrich excreting]

You can explore this “secret history” of archosaur pee by entering the appropriate search terms into Google Scholar, where you’ll find papers with titles like:

  • “Technique for the collection of clear urine from the Nile crocodile (Crocodylus niloticus)” (Myburgh et al. 2012)
  • “Movement of urine in the lower colon and cloaca of ostriches” (Duke et al. 1995)
  • “Plasma homeostasis and cloacal urine composition in Crocodylus porosus caught along a salinity gradient” (Grigg 1981)
  • “Cloacal absorption of urine in birds” (Skadhauge 1976)
  • “The cloacal storage of urine in the rooster” (Skadhauge 1968)

I’ve helpfully highlighted the operative term, to reinforce the main point of the post. Many of these papers are freely available – get the links from the References section below. A few are paywalled – really, Elsevier? $31.50 for a half-century-old paper on chicken pee? – but I’m saving them up, and I’ll be happy to lend a hand to other scholars who want to follow this stream of inquiry. If you’re really into the physiology of birds pooling pee in their poopers, the work of Erik Skadhauge will be a gold mine.

Now, to be fair, I seriously doubt that any bird has ever peed for 17 seconds. But the misinformation abroad on the net seems to be more about whether birds and other archosaurs can pee at all, rather than whether a normal amount of bird pee was exaggerated for comedic effect in the Angry Birds trailer.

[Photo: another ostrich excreting]

In conclusion, birds and crocs can pee. Go tell the world.

And now, those gator peeing videos I promised:

UPDATE

Jan. 30, 2016: I just became aware that I had missed one of the best previous discussions of this topic, with one of the best videos, and the most relevant citations! The post is this one, by Brian Switek, which went up almost two years ago; the video is this excellent shot of an ostrich urinating and then defecating immediately after:

…and the citations are McCarville and Bishop (2002) – an SVP poster about a possible sauropod pee-scour, which I knew about but didn’t mention yet because I was saving it for a post of its own – and Fernandes et al. (2004) on some very convincing trace fossils of dinosaurs peeing on sand, from the Lower Cretaceous of Brazil. In addition to being cogent and well-illustrated, the Fernandes et al. paper has the lovely attribute of being freely available, here.

So, sorry, Brian, that I’d missed your post!

And for everyone else, stand by for another dinosaur pee post soon. And here’s one more video of an ostrich urinating (not pooping as the video title implies). The main event starts about 45 seconds in.

References

I was a bit disappointed to hear David Attenborough on BBC Radio 4 this morning, while trailing a forthcoming documentary, telling the interviewer that you can determine the mass of an extinct animal by measuring the circumference of its femur.

We all know what he was alluding to, of course: the idea, first published by Anderson et al. (1985), that if you measure the life masses of lots of animals, and then measure their long-bone circumferences when they’ve died, you can plot the two measurements against each other, find a best-fit line, and extrapolate it to estimate the masses of dinosaurs based on their limb-bone measurements.

[Figure 1 of Anderson et al. (1985)]

This approach has been extensively refined since 1985, most recently by Benson et al. (2014), but the principle is the same.
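To make the method concrete, here is a minimal sketch of the fit-and-extrapolate procedure in Python. The calibration numbers are invented for illustration; the real studies fit equations of this form to measurements from many living animals, and publish particular coefficients that are not reproduced here.

```python
import numpy as np

# Invented calibration data: long-bone circumference (mm) vs. body mass (kg)
# for living animals. Real datasets are far larger and more carefully curated.
circumference = np.array([120.0, 180.0, 250.0, 400.0, 620.0])
mass = np.array([55.0, 210.0, 640.0, 3200.0, 12000.0])

# Fit a straight line in log-log space: log10(mass) = a * log10(circ) + b.
a, b = np.polyfit(np.log10(circumference), np.log10(mass), 1)

def estimate_mass(circ_mm):
    """Extrapolate the best-fit line to a new circumference. This is a
    point estimate only: it says nothing about the confidence interval."""
    return 10 ** (a * np.log10(circ_mm) + b)

print(estimate_mass(900.0))  # a sauropod-sized femur, far beyond the data
```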

But the thing is, as Anderson et al. and other authors have made clear, the error-bars on this method are substantial. It’s not super-clear in the image above (Fig. 1 from the Anderson et al. paper) because log-10 scales are used, but the 95% confidence interval is about 42 pixels tall, compared with 220 pixels for an order of magnitude (i.e. an increment of 1.0 on the log-10 scale). That means the interval is 42/220 ≈ 0.2 of an order of magnitude, which is a factor of 10^0.2 ≈ 1.58. In other words, you could have two animals with equally robust femora, one of them nearly 60% heavier than the other, and both would fall within the 95% confidence interval.
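Spelled out as a calculation, using the pixel measurements read off the figure above:

```python
interval_decades = 42 / 220      # height of the 95% CI in orders of magnitude
factor = 10 ** interval_decades  # ~1.55; rounding up to 0.2 decades gives 1.58
print(interval_decades, factor)
```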

I’m surprised that someone as experienced and knowledgeable as Attenborough would perpetuate the idea that you can measure mass with any precision in this way (even more so when using only a femur, rather than the femur+humerus combo of Anderson et al.).

More: when the presenter told him that not all scientists buy the idea that the new titanosaur is the biggest known, he said that came as a surprise. Again, it’s disappointing that the documentary researchers didn’t make Attenborough aware of, for example, Paul Barrett’s cautionary comments or Matt Wedel’s carefully argued dissent. Ten minutes of simple research would have found this post — for example, it’s Google’s fourth hit for “how big is the new argentinian titanosaur”. I can only hope that the actual documentary, which screens on Sunday 24 January, doesn’t present the new titanosaur’s mass as a known and agreed number.

(To be clear, I am not blaming Attenborough for any of this. He is a presenter, not a palaeontologist, and should have been properly prepped by the researchers for the programme he’s fronting. He is also what can only be described as 89, so should be forgiven if he’s not quite as quick on his feet when confronted with an interviewer as he used to be.)

Update 1 (the next day)

Thanks to Victoria Arbour for pointing out an important reference that I missed: it was Campione and Evans (2012) who expanded Anderson et al.’s dataset and came up with the revised equation that Benson et al. used.

Update 2 (same day as #1)

It seems most commenters are inclined to go with Attenborough on this. That’s a surprise to me — I wonder whether he’s getting a free pass because of who he is. All I can say is that as I listened to the segment it struck me as really misleading. You can listen to it for yourself here if you’re in the UK; otherwise you’ll have to make do with this transcript:

“It’s surprising how much information you can get from just one bone. I mean for example that thigh bone, eight feet or so long, if you measure the circumference of that, you will be able to say how much weight that could have carried, because you know what the strength of bone is. So the estimate of weight is really pretty accurate and the thought is that this is something around over seventy tonnes in weight.”

(Note also that the Anderson et al./Campione and Evans method has absolutely nothing to do with the strength of bone.)

Also of interest was this segment, which followed immediately:

How long it was depends on whether you think it held its neck out horizontally or vertically. If it held it out horizontally, well then it would be about half as big again as the Diplodocus, which is the dinosaur that’s in the hall of the Natural History Museum. It would be absolutely huge.

Interviewer: And how tall, if we do all the dimensions?

Ah well that is again the question of how it holds its neck, and it could have certainly reached up about to the size of a four or five storey building.

Needless to say, the matter of neck posture is very relevant to our interests. I don’t want to read too much into a couple of throwaway comments, but the implication does seem to be that this is an issue that the documentary might spend some time on. We’ll see what happens.

References

I’d hoped that we’d see a flood of BRONTOSMASH-themed artwork, but that’s not quite happened. We’ve seen a trickle, though, and that’s still exciting. Here are the ones I know about. If anyone knows of more, please let me know and I will update this post.

First, in a comment on the post with my own awful attempts, Darius posted this sketch of a BRONTOSMASH-themed intimidation display:

[Image: apatosaurinae_sp_scene]

And in close-up:

[Image: apatosaurinae_sp_scene, in close-up]

Very elegant, and it’s nice to see an extension of our original hypothesis into other behaviours.

The next thing I saw was Mark Witton’s beautiful piece, described on his own site (in a post which coined the term BRONTOSMASH):

[Image: BRONTOSMASH, by Mark Witton]

And in close-up:

[Image: BRONTOSMASH by Mark Witton, in close-up]

I love the sense of bulk here — something of the elephant-seal extant analogue comes through — and the subdued colour scheme. Also, the Knight-style inclusion in the background of the individual in the swamp. (No, sauropods were not swamp-bound; but no doubt, like elephants, they spent at least some time in water.)

And finally (for now, at least) we have Matthew Inabinett’s piece, simply titled BRONTOSMASH:

[Image: BRONTOSMASH, by Matthew Inabinett]

I love the use of traditional materials here — yes, it still happens! — and I like the addition of the dorsal midline spike row to give us a full on TOBLERONE OF DOOM. (Also: the heads just look right. I wish I could do that. Maybe one day.)

Update (Monday 26 October)

Here is Oliver Demuth’s sketch, as pointed out by him in a comment.

[Image: BRONTOSMASH sketch by Oliver Demuth]

Thanks, Oliver! Nice to see the ventral-on-dorsal combat style getting some love.

So that’s where we are, folks. Did I miss any? Is anyone working on new pieces on this theme? Post ’em in the comments!

 

We’re delighted to host this guest-blog on behalf of Richard Butler, Senior Research Fellow at the University of Birmingham, and guru of basal ornithischians. (Note that Matt and I don’t necessarily endorse or agree with everything Richard says; but we’re pleased to provide a forum for discussion.)


Dear friends and colleagues within the SVPCA community;

I am posting here courtesy of Mike and Matt with two objectives. First, I would like to provisionally offer Birmingham as a venue for the 2017 SVPCA meeting, with a host committee of myself, Ivan Sansom, and our postdocs and students. I propose to host the meeting at the University of Birmingham and the Lapworth Museum of Geology, following the latter’s redevelopment and reopening in 2016. Second, I would note that this offer is conditional on the implementation of some changes in SVPCA organisation that I believe will help secure the future of the meeting, while retaining its current atmosphere. Although I have already discussed these proposed changes with many colleagues via email, a broad-scale and open consultation and discussion within the community is needed, hence this post and open comment section.

Despite the apparent success of recent meetings, there are a couple of factors that give me substantial concern about the meeting’s future. There is a trend, noted by several people, toward increasing disengagement from a large component of the early postdoctoral career and established academic community, with many of these individuals (including myself) attending SVPCA less and less frequently. Numbers provided by Richard Forrest show a small but steady decline in the number of people paying full registration (i.e. non-students) over the last five years. Having discussed this with a number of colleagues, it is clear to me that it stems from multiple causes, including meeting length and structure, ever-increasing time constraints, and competition with the myriad other meetings such as PalAss, SVP, EAVP, ICVM etc. This disengagement is worrying for a number of reasons, but perhaps most pressingly because it is exactly this part of the vertebrate palaeontology community who are generally expected to organise the meeting in future years.

I am also concerned that people are not queuing up to organise the meeting. We are just about getting by from year to year, but offers are sparse at exactly a time when there are almost certainly more vertebrate palaeontologists employed in the UK than ever before. Why is this? Well, taking on the organisation of SVPCA in its current form is not exactly attractive in the current academic world of REF, impact, museum cuts, and the ongoing marketisation of universities, with charges for the use of lecture theatres and other spaces increasing rapidly. The meeting is long relative to its size (particularly when SPPC is considered) and its budget is low, and the lack of any formal organisation to SVPCA means that there is limited support or continuity from year to year. Hosting it is unlikely to substantially enhance your CV, but it will certainly impact negatively on your other outputs (i.e. papers, grant applications) for that year. We risk reaching a point in the near future where there is no-one willing to host the meeting and the meeting grinds to a halt.

My proposal is that the meeting could bear a small degree of formalisation and modernisation without losing its character, and doing so would ease pressures on hosts. Following discussion with a broad range of colleagues within the SVPCA community, I am proposing that a small SVPCA steering group be established as part of the planning for the Birmingham meeting. This steering group could be established in a simple, representative, democratic, cost-free, and light-touch manner. This group would not need to meet in person other than at SVPCA itself so there would be no financial cost. There would then be an open and democratic basis for deciding upon the future of the meeting and ensuring continuity from year to year.

This committee could come up with an agreed list of recommendations for how the meeting should be organised in the future, addressing topics such as meeting length, the role of SPPC, the relationship of the meeting to PalAss (who already provide significant financial and logistic support), the abstract review process, and innovations such as lightning talks, workshops and keynotes. It could also find solutions to the significant logistic issues to do with bank accounts, payments and the like, all of which place unnecessary strain on the local organisers. Local organisers would still have considerable autonomy, but they would receive more support.

As an initial proposal I suggest a small committee that attempts to represent the different communities that make up SVPCA. The last and next meeting hosts should be on it, as well as perhaps five additional elected members, serving limited terms, to represent the student, early career researcher (up to 10 years post-PhD), senior academic, museum, and non-professional communities. Pretty much all of the feedback from colleagues on this idea to date has been positive. Note that this does not imply the formation of a formal society (although that would be an option that a steering committee could discuss), and nor does it challenge many of the aspects of SVPCA that so many of us find attractive, such as its friendly atmosphere or the absence of parallel sessions. I hope it will provide a framework for us to continue to promote scientific excellence and drive up standards in UK vertebrate palaeontology, and help secure the future of the meeting for the next 60 years. I would love to hear any opinions that the community has on this proposal, and the future of SVPCA more broadly.

Ten years ago today — on 15 September 2005 — my first palaeo paper was published: Taylor and Naish (2005) on the phylogenetic nomenclature of diplodocoids. It’s strange to think how fast the time has gone, but I hope you’ll forgive me if I get a bit self-indulgent and nostalgic.

[Image: the abstract of Taylor and Naish (2005)]

I’d applied to join Portsmouth University on a Masters course back in April 2004 — not because I had any great desire to earn a Masters but because back in the bad old days, being affiliated to a university was about the only way to get hold of copies of academic papers. My research proposal, hilariously, was all about the ways the DinoMorph results are misleading — something that I am still working on eleven years later.

In May of that year, I started a Dinosaur Mailing List thread on the names and definitions of the various diplodocoid clades. As that discussion progressed, it became clear that there was a lot of ambiguity, and for my own reference I started to make notes. I got into an off-list email discussion about this with Darren Naish (who was then finishing up his Ph.D at Portsmouth). By June we thought it might be worth making this into a little paper, so that others wouldn’t need to do the same literature trawl we’d done.

In September of 2004, I committed to the Portsmouth course, sending my tuition fees in a letter that ended:

[Image: the end of my tuition-fees letter]

On the way to SVPCA that year, in Leicester, I met Darren on the train, and together we worked through a printed copy of the in-progress manuscript that I’d brought with me. He was pretty happy with it, which meant a lot to me. It was the first time I’d had a legitimate palaeontologist critique my work.

At one of the evening events of that SVPCA, I fell into conversation with micro-vertebrate screening wizard Steve Sweetman, then on the Portsmouth Ph.D course, and he persuaded me to switch to the Ph.D. (It was my second SVPCA, and the first one where I gave a talk.) Hilariously, the heart of the Ph.D project was to be a description of the Archbishop, something that I have still not got done a decade later, but definitely will this year. Definitely.

On 7th October 2004, we submitted the manuscript to the Journal of Paleontology, and got an acknowledgement of receipt <sarcasm>after just 18 short days</sarcasm>. But three months later (21st January 2005) it was rejected on the advice of two reviewers. As I summarised the verdict to Darren at the time:

It’s a rejection. Both reviewers (an anonymous one and [redacted]) say that the science is pretty much fine, but that there just isn’t that much to say to make the paper worthwhile. [The handling editor] concurs in quite a nice covering letter […] Although I think the bit about “I respect both of you a great deal” is another case of Wrong Mike Taylor Syndrome :-)

This was my first encounter with “not significant enough for our journal” — a game that I no longer play. It was to be very far from my last experience of Wrong Mike Taylor Syndrome.

At this point, Darren and I spent a while discussing what to do: revise and resubmit (though one of the reviewers said not to)? Try to subsume the paper into another more substantial one (as one reviewer suggested)? Invite the reviewers to collaborate with us on an improved version (as the editor suggested)? Or just revise according to the reviewers’ more helpful recommendations and send it elsewhere? I discussed this with Matt as well. The upshot was that on 20th February Darren and I decided to send the revised version to PaleoBios, the journal of the University of California Museum of Paleontology (UCMP) — partly because Matt had had good experiences there with two of his earlier papers.

[Side-note: I am delighted to see that, since I last checked, PaleoBios has now made the leap to open access, though as of yet it says nothing about the licence it uses.]

Anyway, we submitted the revised manuscript on 26th May; and we got back an Accept With Minor Revisions six weeks later, having received genuinely useful reviews from Jerry Harris and Matt. (This of course was long before I’d co-authored anything with Matt. No handling editor would assign him to review one of my papers now.) It took us two days to turn the manuscript around with the necessary minor changes made, and another nine days of back and forth with the editor before we reached acceptance. A week later I got the proof PDF to check.

Back in 2005, publication was a very different process, because it involved paper. I remember the thrill of several distinct phases in the publication process — particularly sharp the first time:

  • Seeing the page proof — evidence that I really had written a legitimate scholarly paper. It looked real.
  • The moment of being told that the paper was published: “The issue just went to the printer, so I will send the new reprints […] when I get them, probably sometime next week.”
  • Getting my copy of the final PDF.
  • The day that the physical reprints arrived — funny to think that they used to be a thing. (They’re so Ten Years Ago now that even the SVPCA auction didn’t have many available for bid.)
  • The tedious but somehow exhilarating process of sending out physical reprints to 30 or 40 people.
  • Getting a physical copy of the relevant issue of the journal — in this case, PaleoBios 25(2).

I suppose it’s one of the sadder side-effects of ubiquitous open access that many of these stages don’t happen any more. Now you get your proof, then the paper appears online, and that’s it. Bam, done.

I’m kind of glad to have lived through the tail end of the old days, even though the new days are better.

To finish, there’s a nice little happy ending for this paper. Despite being in a relatively unregarded journal, it’s turned out to be among my most cited works. According to Google Scholar, this humble little taxonomic note has racked up 28 citations: only two fewer than the Xenoposeidon description. It’s handily outperforming other papers that I’d have considered much more substantial, and which appeared in more recognised journals. It just goes to show, you can never tell what papers will do well in the citation game, and which will sink without trace.

References

You know what’s wrong with scholarly publishing?

Wait, scrub that question. We’ll be here all day. Let me jump straight to the chase and tell you the specific problem with scholarly publishing that I’m thinking of.

There’s nowhere to go to find all open-access papers, to download their metadata, to access it via an open API, to find out what’s new, to act as a platform for the development of new tools. Yes, there’s PubMed Central, but that’s only for work funded by the NIH. Yes, there’s Google Scholar, but that has no API, and at any moment could go the way of Google Wave and Google Reader when Google loses interest.

Instead, we have something like 4000 repositories out there, balkanised by institution, by geographical region, and by subject area. They have different UIs, different underlying data models, different APIs (if any). They’re built on different software platforms. It’s a jungle out there!

As researchers, we don’t need 4000 repos. You know what we need? One Repo.

Hey! That would be a good name for a project!

I’ve mentioned before how awesome and pro-open my employers, Index Data, are. (For those who are not regular readers, I’m a palaeontologist only in my spare time. By day, I’m a software engineer.) Now we’re working on an index of green/gold OA publishing. Metadata of every article across every repository and publisher. We want it to be complete, in the sense that we will be going aggressively for the long tail as opposed to focusing on some region or speciality, or things that are easily harvestable by OAI-PMH or other standards. We want it to be of a high, consistent quality in terms of metadata. We want it to be up to date. And most importantly, we want it to be fully open for all and any kind of re-use, by any other actor. This will include downloadable data files, OAI-PMH access, search-retrieve web services, embeddable widgets and more. We also envisage a Linked Data representation with a CRUD interface that allows third parties to contribute supplemental information, entity reconciliation, tagging, etc.
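As a taste of what the harvesting side involves, here is a minimal sketch of an OAI-PMH client in Python. It is only a sketch: the hard parts of the project are the error handling, rate limiting, the long tail of repositories that don’t speak OAI-PMH at all, and normalising the wildly inconsistent metadata. The arXiv URL in the final comment is just one example of a public OAI-PMH endpoint.

```python
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url):
    """Yield (title, identifier) pairs from one OAI-PMH repository,
    following resumption tokens until the record stream is exhausted."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            ident = record.find(".//" + DC + "identifier")
            yield (title.text if title is not None else None,
                   ident.text if ident is not None else None)
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not token.text:
            break  # no more pages
        params = {"verb": "ListRecords", "resumptionToken": token.text}

# Example: for title, ident in harvest("http://export.arxiv.org/oai2"): ...
```

Multiply that by 4000 repositories, each with its own quirks, and you have some idea of why a single consistent index doesn’t yet exist.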

Instead of 4000 fragments, one big, meaty chunk of data.

Because we at Index Data have spent the last ten years helping aggregators and publishers and others get access to difficult-to-access information through all kinds of crazy mechanisms, we have a unique combination of the skills, the tools, and the desire to pursue this venture.

So The One Repo is born. At the moment, we have:

  • Harvesting set up for an initial set of 20 repositories.
  • A demonstrator of one possible UI.
  • A whitepaper describing the motivation and some of the technical aspects.
  • A blog about the project’s progress.
  • An advisory board of some of the brightest, most experienced and wisest people in the world of open access.

We’ve been flying under the radar for the last month and a bit. Now we’re ready for the world to know what we’re up to.

The One Repo is go!

Re-reading an email that Matt sent me back in January, I see this:

One quick point about [an interesting sauropod specimen]. I can envision writing that up as a short descriptive paper, basically to say, “Hey, look at this weird thing we found! Morrison sauropod diversity is still underestimated!” But I honestly doubt that we’ll ever get to it — we have literally years of other, more pressing work in front of us. So maybe we should just do an SV-POW! post about the weirdness of [that specimen], so that the World Will Know.

Although as soon as I write that, I think, “Screw that, I’m going to wait until I’m not busy* and then just take a single week* and rock out a wiper* on it.”

I realize that this way of thinking represents a profound and possibly psychotic break with reality. *Thrice! But it still creeps up on me.

(For anyone not familiar with the “wiper”, it refers to a short paper of only one or two pages. The etymology is left as an exercise for the reader.)

It’s just amazing how we keep on and on falling for this delusion that we can get a paper out quickly, even when we know perfectly well, going into the project, that it’s not going to work out that way. To pick a recent example, my paper on quantifying the effect of intervertebral cartilage on neutral posture was intended to be literally one page, an addendum to the earlier paper on cartilage: title, one paragraph of intro, diagram, equation, single reference, DONE! Instead, it landed up being 11 pages long with five illustrations and two tables.

I think it’s a reasonable approximation to say that any given project will require about an order of magnitude more work than we expect at the outset.

Even as I write this, the top of my palaeo-work priority list is a paper that I’m working on with Matt and two other colleagues, which he kicked off on 6 May, writing:

I really, really want to kill this off absolutely ASAP. Like, seriously, within a week or two. Is that cool? Is that doable?

To which I idiotically replied:

IT SHALL BE SO!

A month and a bit later, the answers to Matt’s questions are clear. Yes, it’s cool; and no, it’s not doable.

The thing is, I think that’s … kind of OK. The upshot is that we end up writing reasonably substantial papers, which is after all what we’re meant to be trying to do. If the reasonably substantial papers that end up getting written aren’t necessarily the ones we thought they were going to be, well, that’s not a problem. After all, as I’ve noted before, my entire Ph.D dissertation was composed of side-projects, and I never got around to doing the main project. That’s fine.

In 2011, Matt’s tutorial on how to find problems to work on discussed in detail how projects grow and mutate and anastomose. I’m giving up on thinking that this is a bad thing, abandoning the idea that I ought to be in control of my own research program. I’m just going to keep chasing whatever rabbits look good to me at the time, and see what happens.

Onwards!