After my short post on what to leave out of a conference talk, here are a few more positive thoughts on what to include, based on some of the SVPCA talks that really stayed with me.

First, Graeme Lloyd’s talk in the macroevolution symposium did a great job of explaining very complex concepts well (different ways of mapping morphospace onto phylogeny). It was a necessarily difficult talk to follow, and I did get lost a few times. But, crucially, Graeme offered lots of jump-back-on points, so I was never out of the game for more than a minute or so.

I think that concept of jump-back-on points is important (albeit clumsily named). It’s easy, if someone is describing, for example, a detailed osteological point about a bone in basal tetrapods that doesn’t even exist in the animals we know and love, to tune out and lose the thread of the rest of the talk. There is an art in making it easy for people in this situation to tune back in. I’m not sure how it’s done: it might be more a matter of style than of content. I’ll think more on this one.

Also: several times as I watched Graeme’s talk, I internally raised an objection (such as the low proportion of variation explained by PC1 and PC2 in the plots he was showing) only for him to immediately go on to note the issue, and then explain how he deals with it. This should not be too difficult to emulate: anticipate possible objections and meet them in advance. This is something to have in mind when rehearsing your talk.

It was a talk that had obviously had a lot of work put into it. Another talk, which I shall not attribute, had very obviously been thrown together in 24 hours, which I think is flatly unacceptable. When you know you’re going to have a hundred professionals gathered in a room to listen to you, do the work to make it worth the audience’s while. Putting a talk together at the last minute is not a ninja move, or a mark of experience. It’s simple unprofessionalism.

Neil Brocklehurst’s talk was based on a taxon that was of very little interest to me: Milosaurus, and the pelycosaur-grade synapsid group to which it belongs. But his presentation was a textbook example of how to efficiently introduce a taxon and make it interesting before launching into details. There is almost certainly video out there somewhere — the SVPCA talks were filmed — and I recommend it highly when it becomes available. For fifteen glorious minutes, I was tricked into thinking that Carboniferous synapsids are fascinating. And it’s left me thinking that, hey, maybe they are interesting.


I deliberately left a lot of things out of the poster I presented at SVPCA: an abstract (who needs repetition?), institutional logos (who cares?), references (no-one who couldn’t find what they need in other ways is going to follow them up), headings (all the text was in figure captions) and generally as much text as I could omit without compromising clarity.

In the same way, I found myself thinking a lot of the talks at this conference could have done with leaving some conventional things out — especially as talks now take place in 15-minute slots rather than 20 minutes.

Here are some things you don’t need to do:

  • Don’t start by saying the title. We can read it. Instead, while the title slide is up, tell us something about why we should care about your talk.
  • Don’t introduce yourself. It doesn’t matter if you’re in the last year of your Ph.D., or starting a postdoc with such-and-such a person. We care about your science, not your biography (at least during your talk).
  • Don’t reiterate your conclusions at the end. We just heard them: if we can’t remember what you told us less than 15 minutes ago, we have bigger problems.
  • Don’t say “thanks for listening”. We’re here to listen to you. It’s why we came to the conference. You’re doing us a favour, not the other way around.
  • Don’t read the acknowledgements out loud. Nothing is more boring to listen to(*). Just leave the acknowledgements up on the screen as you finish, and we can read them if we’re interested.
  • Don’t say “I’ll be happy to take questions”. It’s the moderator’s job to invite questions — and indeed to judge whether there is enough time.

Why omit these things? Most importantly, because they waste time, which you want to use to tell us your story. Your work is fascinating and we want to hear all about it. Do all you can to make space for it.

[See also: Tutorial 16: giving good talks (in four parts)]

 


(*) Except talks about mammal teeth, of course.

 

This is very belated, but back in the summer of 2014 I was approached to write a bunch of sections — all of them to do with dinosaurs, naturally — in the book Evolution: The Whole Story. I did seven group overviews (Dinosauria overview, prosauropods, sauropods, stegosaurs, ankylosaurs, marginocephalians, and hadrosaurs), having managed to hand the theropod work over to Darren.

My author copy arrived in February 2016 (which, yes, is over a year ago. Your point?) It’s really nice:

IMG_2046

And at 576 heavy, glossy pages, it’s a hefty tome.

IMG_2047

My contribution was fairly minimal, really: I provided about 35 pages. Darren wrote a lot more of it. Still, I’m pleased to have been involved. It’s nicely produced.

Here’s a sample spread — the first two pages of a four-page overview of sauropods, showing some nice illustrations and a typical timeline across the bottom of the page.

IMG_2048

And here’s one of the ten “highlights” sections I did, mostly on individual dinosaurs. This is the best one, of course, based on sheer taxon awesomeness, since it deals with Giraffatitan:

IMG_2049

Unfortunately, not all of the artwork is of this quality. For example, the life restoration that graces my spread on Argentinosaurus makes me want to stab my own eyes out:

IMG_2051

Still, putting it all together, this is an excellent book, providing a really helpful overview of the whole tree of life, each section written by experts. It’s selling for a frankly ludicrous £16.55 in the UK — it’s easily worth two or three times that; and $30.24 in the US is also excellent value.

Highly recommended, if I do say so myself.

It’s now been widely discussed that Jeffrey Beall’s list of predatory and questionable open-access publishers — Beall’s List for short — has abruptly gone away. No-one really knows why, but there are rumblings that he has been hit with a legal threat that he doesn’t want to contest.

To get this out of the way: it’s always a bad thing when legal threats make information quietly disappear; to that extent, at least, Beall has my sympathy.

That said — overall, I think making Beall’s List was probably not a good thing to do in the first place: it was an essentially negative approach, as opposed to DOAJ’s more constructive whitelisting. And under Beall’s sole stewardship it was a disaster, due to his well-known ideological opposition to all open access. So I think it’s a net win that the list is gone.

But, more than that, I would prefer that it not be replaced.

Researchers need to learn the very very basic research skills required to tell a real journal from a fake one. Giving them a blacklist or a whitelist only conceals the real issue, which is that you need those skills if you’re going to be a researcher.

Finally, and I’m sorry if this is harsh, I have very little sympathy with anyone who is caught by a predatory journal. Why would you be so stupid? How can you expect to have a future as a researcher if your critical thinking skills are that lame? Think Check Submit is all the guidance that anyone needs; and frankly much more than people really need.

Here is the only thing you need to know, in order to avoid predatory journals, whether open-access or subscription-based: if you are not already familiar with a journal — because it’s published research you respect, or colleagues who you respect have published in it or are on the editorial board — then do not submit your work to that journal.

It really is that simple.

So what should we do now Beall’s List has gone? Nothing. Don’t replace it. Just teach researchers how to do research. (And supervisors who are not doing that already are not doing their jobs.)

 

This one is for journalists and other popularizers of science. I see a lot of people writing that “scientists believe” this or that, when talking about hadrons or hadrosaurs or other phenomena grounded in evidence.

Pet peeve: believing is what people do in the absence of evidence, or despite evidence. Scientists often have to infer, estimate, and even speculate, but all of those activities are grounded in evidence and reason, not belief.1

In addition to doing science, scientists may also believe in the proper, spiritual sense, in which case you are free to explain what certain individual scientists believe. But that’s not how the word “believe” is used most of the time when it comes up in science stories.

So stop it. It’s lazy, and it’s damaging, because it gives (some) people the impression that scientists are clueless buffoons who make stuff up out of the whole cloth in a cynical bid to keep their jobs. Given that we have an entire political party pushing that view and trying to defund science and education at every turn, we don’t need that caricature promoted any further.

Even if you don’t accept that argument, it’s still bad writing. Good writing explains why people think as they do. So do that instead. In addition to the aforementioned “infer”, “estimate”, and “speculate”, you can use “surmise”, “reason”, “predict”, or – if you must – “think”. “Scientists have found” would be better still.

Best of all would be if the “scientists X” clause were preceded by “Based on this evidence” (which you’ve just explained in the previous sentences), so readers can connect cause (evidence) and effect (scientists think) – which is what science is mostly about in the first place.

 

 

1. I realize that I am grossly oversimplifying – evidence, reason, and belief can interact in complicated ways in both spiritual and scientific spheres. But my purpose here is fixing poor word choice, not exploring that interaction.

Liem et al 2001 PPTs - intro slide

Functional Anatomy of the Vertebrates: An Evolutionary Perspective, by Liem et al. (2001), is by some distance my favorite comparative vertebrate anatomy text. When I was a n00b at Berkeley, Marvalee Wake assigned it to me as preparatory reading for my qualifying exams.

This scared me to death back then. Now I love it – sharkitecture!

The best textbooks, like Knut Schmidt-Nielsen’s Animal Physiology (which deserves a post or even series of its own sometime), have a clarity of writing and illustration that makes the fundamentals of life seem not only comprehensible, but almost inevitable – without losing sight of the fact that nature is complex and we don’t know everything yet. FAotV has both qualities, in spades.

Where vertebrae come from. Liem et al. (2001: fig. 8.4).

I’m writing about this now because Willy Bemis, second author on FAotV, has just made ALL of the book’s illustrations available for free on his website, in a series of 22 PowerPoint files that correspond to the 22 chapters of the book. All told they add up to about 155 MB, which is trivial – even the $5 jump drives in the checkout lanes at department stores have five to ten times that much space.

Aiiiieeee – a theropod! Aim for its head! Liem et al. (2001: fig. 8.17).

Of course, to get the full benefit you should also pick up a copy of the book. I see used copies going for under $40 in a lot of places online. Mine will have pride of place on my bookshelf until I enter the taphonomic lottery. And I’ll be raiding these PPTs for images from now until then, too.

Countercurrent gas exchange in fish gills – a very cool system. Liem et al. (2001: fig. 18.6).

So do the right thing, and go download this stuff, and use it. Be sure to credit Liem et al. (2001) for the images, and thank Willy Bemis for making them all available. It’s a huge gift to the field. Here’s that link again.

Liem et al 2001 PPTs - shark jaw and forelimb musculature

Dangit, if only there were a free online source for illustrations of shark anatomy… Liem et al. (2001: fig. 10.12).

But wait – that’s not all! Starting on June 28, Dr. Bemis will be one of six faculty members from Cornell and the University of Queensland teaching a 4-week massively open online course (MOOC) on sharks. Freakin’ sharks, man!

“What did you do this summer? Hang out and play Nintendo?”

“Yep. Oh, and I also took a course on freakin’ sharks from some awesome shark experts. You?”

As the “massively open” part implies, the course is free, although you have the option of spending $49 to get a certificate of completion (assuming you finish satisfactorily). Go here to register or get more info.

Reference

  • Liem, K.F., Bemis, W.E., Walker, W.F., and Grande, L. 2001. Functional Anatomy of the Vertebrates. (3rd ed.). Thomson/Brooks Cole, Belmont, CA.

As a long-standing proponent of preprints, it bothers me that of all PeerJ’s preprints, by far the one that has had the most attention is Terrell et al. (2016)’s Gender bias in open source: Pull request acceptance of women versus men. Not helped by a misleading abstract, it has attracted a string of sensationalist headlines.

But in fact, as Kate Jeffrey points out in a comment on the preprint (emphasis added):

The study is nice but the data presentation, interpretation and discussion are very misleading. The introduction primes a clear expectation that women will be discriminated against while the data of course show the opposite. After a very large amount of data trawling, guided by a clear bias, you found a very small effect when the subjects were divided in two (insiders vs outsiders) and then in two again (gendered vs non-gendered). These manipulations (which some might call “p-hacking”) were not statistically compensated for. Furthermore, you present the fall in acceptance for women who are identified by gender, but don’t note that men who were identified also had a lower acceptance rate. In fact, the difference between men and women, which you have visually amplified by starting your y-axis at 60% (an egregious practice) is minuscule. The prominence given to this non-effect in the abstract, and the way this imposes an interpretation on the “gender bias” in your title, is therefore unwarranted.

And James Best, in another comment, explains:

Your most statistically significant results seem to be that […] reporting gender has a large negative effect on acceptance for all outsiders, male and female. These two main results should be in the abstract. In your abstract you really should not be making strong claims about this paper showing bias against women because it doesn’t. For the inside group it looks like the bias moderately favours women. For the outside group the biggest effect is the drop for both genders. You should hence be stating that it is difficult to understand the implications for bias in the outside group because it appears the main bias is against people with any gender vs people who are gender neutral.
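Kate Jeffrey’s point about uncorrected subdivision is easy to quantify. Here is a minimal Python sketch — with made-up numbers, not the paper’s data — of why splitting a dataset into subgroups and testing each one inflates the false-positive rate, and how a simple Bonferroni correction compensates:

```python
# Illustration only: not the paper's actual analysis or data.
# Running several tests at alpha = 0.05 inflates the chance of at
# least one spurious "significant" result.
alpha = 0.05

# Split the data in two (insiders vs outsiders), then in two again
# (gendered vs gender-neutral): four subgroup comparisons.
n_tests = 4

# Probability of at least one false positive across all four tests,
# if there is no real effect anywhere (assuming independent tests):
family_wise_error = 1 - (1 - alpha) ** n_tests   # about 0.185

# A simple Bonferroni correction keeps the family-wise rate near alpha:
corrected_alpha = alpha / n_tests                # 0.0125

print(f"uncorrected family-wise error: {family_wise_error:.3f}")
print(f"Bonferroni threshold per test: {corrected_alpha}")
```

In other words, even four uncorrected comparisons push the chance of a spurious “finding” from 5% to almost 19% — which is why the subdivision Jeffrey describes needs statistical compensation.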

Here is the key graph from the paper:

TerrellEtAl2016-fig5

(The legends within the figure are tiny: on the Y-axes, they both read “acceptance rate”; and along the X-axis, from left to right, they read “Gender-Neutral”, “Gendered” and then again “Gender-Neutral”, “Gendered”.)

So James Best’s analysis is correct: the real finding of the study is a truly bizarre one — that disclosing your gender, whatever that gender is, reduces the chance of your code being accepted. For “insiders” (members of the project team), the effect is slightly stronger for men; for “outsiders” it is rather stronger for women. (Note, by the way, that all the differences are much smaller than they appear, because the Y-axis runs from 60% to 90%, not 0% to 100%.)
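To put numbers on that Y-axis point, here is a quick back-of-the-envelope sketch in Python, using made-up round figures rather than the paper’s actual values, showing how a truncated axis inflates a small gap:

```python
# Hypothetical acceptance rates for illustration only --
# not the values from Terrell et al. (2016).
neutral, gendered = 0.64, 0.63

# True relative difference: about 1.6% of the larger rate.
true_ratio = (neutral - gendered) / neutral

# On a y-axis starting at 0.60, the visible bar heights are measured
# from 0.60, so the bars are 0.04 and 0.03 units tall: the shorter bar
# now looks a full 25% shorter than the taller one.
axis_floor = 0.60
apparent_ratio = ((neutral - axis_floor) - (gendered - axis_floor)) / (neutral - axis_floor)

print(f"actual difference: {true_ratio:.1%} of the larger value")
print(f"apparent difference with axis starting at 60%: {apparent_ratio:.0%}")
```

A one-percentage-point gap that would be barely visible on a 0–100% axis looks like a quarter of the bar’s height once the axis starts at 60% — which is exactly the visual amplification Kate Jeffrey’s comment objects to.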

Why didn’t the authors report this truly fascinating finding in their abstract? It’s difficult to know, but it’s hard not to at least wonder whether they felt that the story they told would get more attention than their actual findings — a feeling that has certainly been confirmed by sensationalist stories like Sexism is rampant among programmers on GitHub, researchers find (Yahoo Finance).

I can’t help but think of Alan Sokal’s conclusion on why his obviously fake paper on the physics of gender studies was accepted by Social Text: it “flattered the editors’ ideological preconceptions”. It saddens me to think that there are people out there who actively want to believe that women are discriminated against, even in areas where the data say they are not. Folks, let’s not invent bad news.

Would this study have been published in its present form?

This is the big question. As noted, I am a big fan of preprints. But I think that the misleading reporting in the gender-bias paper would not have made it through peer review — as the many critical comments on the preprint certainly suggest. Had this paper taken a conventional route to publication, with pre-publication review, then I doubt we would now be seeing the present sequence of misleading headlines in respected venues, and the flood of gleeful “see-I-told-you” tweets.

(And what do those headlines and tweets achieve? One thing I am quite sure they will not do is encourage more women to start coding and contributing to open-source projects. Quite the opposite: any women taking these headlines at face value will surely be discouraged.)

So in this case, I think the fact that the study in its present form appeared on such an official-looking venue as PeerJ Preprints has contributed to the avalanche of unfortunate reporting. I don’t quite know what to do with that observation.

What’s for sure is that no-one comes out of this a winner: not GitHub, whose reputation has been unjustly maligned; not the authors, whose reporting has been shown to be misleading; not the media outlets that have leapt uncritically on a sensational story; not the tweeters who have spread alarm and despondency; not PeerJ Preprints, which has unwittingly lent a veneer of authority to this car-crash. And most of all, not the women who will now be discouraged from contributing to open-source projects.