arborization of science

Modified from an original SEM image of branching blood vessels, borrowed from http://blogs.uoregon.edu/artofnature/2013/12/03/fractal-of-the-week-blood-vessels/.

I was reading a rant on another site about how pretentious it is for intellectuals and pseudo-intellectuals to tell the world about their “media diets” and it got me thinking–well, angsting–about my scientific media diet.

And then almost immediately I thought, “Hey, what am I afraid of? I should just go tell the truth about this.”

And that truth is this: I can’t tell you what forms of scientific media I keep up with, because I don’t feel like I am actually keeping up with any of them.

Papers – I have no systematic method of finding them. I don’t subscribe to any notifications or table of contents updates. Nor, to be honest, am I in the habit of regularly combing the tables of contents of any journals.

Blogs – I don't follow any in a timely fashion, although I do check in with TetZoo, Laelaps, and a couple of others every month or two. Way back when we started SV-POW!, we made a command decision not to list any sites other than our own in the sidebar. At the time, that was because we didn't want any hurt feelings or drama over who we did and didn't include. Over time, though, a strong secondary motive for keeping things this way has emerged: it means we're not forced to keep up with the whole paleo blogosphere, which long ago outstripped my capacity to survey competently. Fortunately, those overachievers at Love in the Time of Chasmosaurs have a pretty exhaustive-looking set of links on their sidebar, so globally speaking, someone is already on that.

The contraction in my blog reading is a fairly recent thing. When TetZoo was on ScienceBlogs, I was over there all the time, and there were probably half a dozen SciBlogs that I followed pretty regularly and another dozen or so that I at least kept tabs on. But ScienceBlogs burned down the community I was interested in, and the Scientific American Blog Network is sufficiently ugly (in the UI sense) and reader-unfriendly that it's not worth my dealing with. So I am currently between blog networks–or maybe past my last one.

Social Media – I’m not on Twitter, and I tend to only log into Facebook when I get an interesting notice in my Gmail “Social” folder. Sometimes I’m not on FB for a week or two at a time. So I miss a lot of stuff that goes down there, including notices about new papers. I could probably fix that if I just followed Andy Farke more religiously.

What ends up happening – I mainly find papers relevant to specific projects as I execute those projects; each new project is a new front in my n-dimensional invasion of the literature. My concern is that in doing this, I tend to find the papers that I’m looking for, whereas the papers that have had the most transformative effect on me are the ones I was not looking for at the time.

Beyond that, I find out about new papers because the authors take it on themselves to include me when they email the PDF out to a list of potentially interested colleagues (and many thanks to all of you who are doing that!), or Mike, Darren, or Andy send it to me, or it turns up in the updates to my Google Scholar profile.

So far, this combination of ad hoc and half-assed methods seems to be working, although it does mean that I have unfairly outsourced much of my paper discovery to other people without doing much for them in return. When I say that it’s working, I mean that I don’t get review comments pointing out that I have missed important recent papers. I do get review comments saying that I need to cite more stuff,* but these tend to be papers that I already know of and maybe even cited already, just not in the right ways to satisfy the reviewers.**

* There is a sort of an arrow-of-inevitability thing here, in that reviewers almost always ask you to cite more papers rather than fewer. Only once ever have I been asked to cite fewer sources, and that was when I submitted my dinosaur nerve paper (Wedel 2012) to a certain nameless anatomy journal that ended up not publishing it. One of the reviewers said that I had cited several textbooks and popular science books, that this was poor practice, and that I should have cited the primary literature instead. Apparently this subgenius did not realize that I was citing all of those popular sources as examples of publications that held up the recurrent laryngeal nerve of giraffes as evidence for evolution, which was part of the point that I was making: giraffe RLNs are overrated.

** My usual sin is that I mentally file papers under one or two pigeonholes and forget that a given paper also mentioned C and D in addition to saying a lot about A and B. It's something that vexes me about some of my own papers. I put so much stuff into the second Sauroposeidon paper (Wedel et al. 2000b) that some of it has never been cited–although that paper has been cited plenty, it often does not come up in discussions where some of the data presented therein are relevant, I think because there's just too much stuff in that paper for anyone (who cares about it less than I do) to hold in their heads. But that's a problem to be explored in another post.

The arborization of science

Part of the problem with keeping up with the literature is just that there is so much more of it than there was even a few years ago. When I first got interested in sauropod pneumaticity back in the late 90s, you were pretty much up to speed if you’d read about half a dozen papers:

  • Seeley (1870), who first described pneumaticity in sauropods as such, even if he didn’t know what sauropods were yet;
  • Longman (1933), who first realized that sauropod vertebrae could be sorted into two bins based on their internal structures, which are crudely I-beam-shaped or honeycombed;
  • Janensch (1947), who wrote the first ever paper that was primarily about pneumaticity in dinosaurs;
  • Britt (1993), who first CTed dinosaur bones looking for pneumaticity, independently rediscovered Longman’s two categories, calling them ‘camerate’ and ‘camellate’ respectively, and generally put the whole investigation of dinosaur pneumaticity on its modern footing;
  • Witmer (1997), who provided what I think is the first compelling explanation of how and why skeletal pneumaticity works the way it does, using a vast amount of evidence culled from both living and fossil systems;
  • Wilson (1999), who IIRC was the first to seriously discuss the interplay of pneumaticity and biomechanics in determining the form of sauropod vertebrae.

Yeah, there you go: up until the year 2000, you could learn pretty much everything important that had been published on pneumaticity in dinosaurs by reading five papers and one dissertation. "Dinosaur pneumaticity" wasn't a field yet. It feels like it is becoming one now. To get up to speed today, in addition to the above you'd need to read big swaths of the work of Roger Benson, Richard Butler, Leon Claessens, Pat O'Connor (including a growing body of work by his students), Emma Schachner (not on pneumaticity per se, but too closely related [and too awesome] to ignore), Daniela Schwarz, and Jeff Wilson (and his students), plus important singleton papers like Woodward and Lehman (2009), Cerda et al. (2012), Yates et al. (2012), and Fanti et al. (2013). Not to mention my own work, and some of Mike's and Darren's. And Andy Farke and the rest of Witmer's work, if you're into cranial pneumaticity. And still others if you care about pneumaticity in pterosaurs, which you should if you want to understand how–and, crucially, when–the anatomical underpinnings of ornithodiran pneumaticity evolved. Plus undoubtedly some I've forgotten–apologies in advance to the slighted; please prod me in the comments.

You see? If I actually listed all of the relevant papers by just the authors named above, it would probably run to 50 or so. So someone trying to really come to grips with dinosaur pneumaticity now faces a task roughly equal to the one I faced in 1996, when I was first trying to grok sauropods. This is dim memory combined with lots of guesswork and handwaving, but I probably had to read about 50 papers on sauropods before I felt like I really knew the group. Heck, I read about a dozen on blood pressure alone.

(Note to self: this is probably a good argument for writing a review paper on dinosaur pneumaticity, possibly in collaboration with some of the folks mentioned above–sort of a McIntosh [1990] for the next generation.)

When I wrote the first draft of this post, I was casting about for a word to describe what is going on in science, and the first one that came to mind is “fragmentation”. But that’s not the right word–science isn’t getting more fragmented. If anything, it’s getting more interconnected. What it’s really doing is arborizing–branching fractally, like the blood vessels in the image at the top of this post. I think it’s pointless to opine about whether this is a good or bad thing. Like the existence of black holes and fuzzy ornithischians, it’s just a fact now, and we’d better get on with trying to make progress in this new reality.

How do I feel about all this, now that my little capillary of science has grown into an arteriole and threatens to become a full-blown artery? It is simultaneously exhilarating and worrying. Exhilarating because lots of people are discovering lots of cool stuff about my favorite system, and I have a lot more people to bounce ideas around with than I did when I started. Worrying because I feel like I am gradually losing my ability to keep tabs on the whole thing. Sound familiar?

Conclusion: Help a brother out

Having admitted all of this, it seems imperative that I get my act together and establish some kind of systematic new-paper-discovery method, beyond just sponging off my friends and hoping that they'll continue to deliver everything I need. But it seems inevitable that I am either going to have to become more selective about what I consume–which sounds both stupid and depressing–or lose all of my time just trying to keep up with things.
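
For the sake of concreteness, here's a minimal sketch of one way I could do that systematically: a little Python script (assuming the feedparser library) that polls journal RSS feeds and flags anything mentioning my keywords. The feed URLs below are placeholders, not real endpoints, and the keywords are just my own obsessions.

```
# A toy paper-discovery script: poll journal RSS/Atom feeds and flag
# entries whose title or summary mentions a keyword of interest.
# The feed URLs are placeholders, not real endpoints.
import feedparser

FEEDS = [
    "https://example.org/journal-a/rss",  # placeholder URL
    "https://example.org/journal-b/rss",  # placeholder URL
]
KEYWORDS = ["pneumaticity", "sauropod", "air sac", "vertebra"]

def new_paper_alerts(feeds=FEEDS, keywords=KEYWORDS):
    """Yield (journal, title, link) for entries matching any keyword."""
    for url in feeds:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(k in text for k in keywords):
                yield feed.feed.get("title", url), entry.get("title", ""), entry.get("link", "")

if __name__ == "__main__":
    for journal, title, link in new_paper_alerts():
        print(journal, "-", title)
        print("  ", link)
```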

Hi, I’m Matt. I just arrived here in Toomuchnewscienceistan. How do you find your way around?


If the internet has any underlying monomyth, or universally shared common ground, or absolute rule, it is this:

People love to see the underdog win.

This rule has a corollary:

When you try to censor someone, they automatically become the underdog.

I say “try to censor” someone, because on the internet that is remarkably difficult to achieve. I’m not going to argue that the attention paid to the range of stories told on the internet is fairly distributed–being published is not the same as being read, and people seem to prefer cat pictures to reading about genocide. But it’s awfully hard to shut someone up, and any attempt to do so may backfire spectacularly.

If you work for an organization of any size, or have amassed any considerable power, reputation, or influence personally, you need to keep that at the forefront of your mind in every interaction you ever have with anyone, anywhere, ever. The reason for this constant attention is to keep you from becoming the overdog and thereby making an ass of yourself (and your organization, if you belong to one). Go read about the Streisand Effect and think proactively about how to keep that from happening to you.

Now, for the purposes of this tutorial I am going to arbitrarily sort the full range of possible messages into four bins:

  1. Those that make the teller look good.
  2. Those that make the teller look bad.
  3. Those that make someone else look good.
  4. Those that make someone else look bad.

Two and three are dead easy and often go hand in hand. If you want to spread messages of that type, all you have to do is find someone with less power, reputation, or influence–a prospective underdog, in other words–and be a jerk to them, thus turning them into an actual underdog. Coercion, threats, employment termination–these are all pretty good and may eventually pay off. But if you really want to look like a complete tit, and make the other party an instant hero, you gotta go for censorship. Out here in bitspace, it is the ne plus ultra of suicidal moves. It’s like Chuck Norris winding up for a roundhouse kick to someone’s face, only somehow his foot misses the other person’s face and hits him right in the junk instead. We will click and tap on that until they pry the mice and touchscreens from our cold, dead hands.

The first one–positive messages about yourself–is tricky. You can’t just go around telling people that you’re awesome. Anyone with any sense will suspect advertising. The only sure-fire method I know of is to do good work where people can see it. One thing you will just have to accept is that reputations are slow-growing but fast-burning. So, again, try to avoid burning yours down.

The last one–making someone else look bad–is also surprisingly tricky. If you just broadcast negatives to the world, that will probably backfire. At the very least, people start thinking of you as a negative force rather than a positive one. If the person you want to make look bad has ever lied or falsified data or oppressed anyone, use that. If they’ve ever tried to censor someone, or are actively trying to censor you, rejoice, they’ve done most of the work for you.

The upside of that last one is that, provided you’re not actively nasty, it is hard for others to hurt your reputation. If they just spew vitriol, it will probably backfire. If they lie about you, it will definitely backfire. About the only way to really trash your reputation is through your own actions. Your fate is in your own hands.

———–

So, this is transparently a meditation on the DNLee/Biology Online/Scientific American story.

I would really like to know the backstory. Did someone at Biology Online contact SciAm and ask them to take down DNLee’s post? If so, well, geez, that was stupid. Why does anyone ever expect this to work anymore? I mean, the actual event from which the Streisand Effect got its name happened a decade ago, which may seem short in human terms but is an eternity online (it’s two-thirds of the lifespan to date of Google, for example).

If someone at SciAm did it unilaterally to protect their valued financial partner, it was doubly stupid, because not only did the censorship act itself fail, but now people like me are wondering if Biology Online asked for that “protection”. In other words, people are now suspecting Biology Online of something they might not have even done (although what they did do–what their employee did on their behalf, which amounts to the same thing–was bad enough).

So all in all the affair is like a tutorial on how to royally cock things up on the internet. And in fact it continues to be–Mariette DiChristina's "apology" is a classic non-apology that uses a torrent of words to say very little. Her self-contradictory tweets are much more revealing, despite being under 140 characters each. Her loudest message of all is the complete lack of communication with DNLee before she pulled the post. So meaning scales inversely with message length for DiChristina–not a great quality in an Editor-in-Chief. And, OMG, does she need to learn about the Asoh defense.

In the end, the whole thing just saddens me. I'm sad that SciAm made the wrong call immediately and reflexively. It says to me that they don't care about transparency or integrity. They may say otherwise, but their words are belied by their actions.

I’m sad that, having not even known that Biology Online exists, my perception of them now starts from a position of, “Oh, the ones that called that science writer a whore.” (If you’re a BO fan, please don’t write in to tell me how wonderful BO actually is; doing so is just admitting that you didn’t read this post.)

I’m sad that this happened to DNLee. I hope that going forward her reputation is determined by the quality of her work and the integrity of her actions, and not by words and circumstances inflicted on her by others.

… I wonder if I could make it as a corporate consultant if I put on a suit, walked into rooms full of pointy-haired bosses, and just explained the Streisand Effect and the Asoh Defense as if they were novel insights. I’ll bet I could make a killing.

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?“.

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now that you have your results, it's time to spin them. Use sweeping, unsupported generalisations like "Most of the players are murky. The identity and location of the journals' editors, as well as the financial workings of their publishers, are often purposefully obscured."

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get them through peer-review, when your bias is so obvious? Simple: don't submit your article for peer-review at all. Classify it as journalism, so you don't need to go through review, nor to get ethical approval for the enormous amount of editors' and reviewers' time you've wasted — but publish it in a journal that's known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I'm talking about John Bohannon's "sting operation" in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he'd know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.
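
To see how much the sample construction alone can drive the headline number, here is a toy simulation (a sketch with invented acceptance probabilities, not a reanalysis of Bohannon's data): the journals' behaviour is held fixed, and the only thing that changes between runs is what fraction of the sample is drawn from known-bad publishers.

```
# Toy illustration of selection bias: the same fake paper, the same
# per-journal acceptance probabilities (invented for this sketch),
# but two differently constructed samples.
import random

random.seed(42)

def acceptance_rate(n_journals, share_predatory,
                    p_accept_predatory=0.8, p_accept_ordinary=0.2):
    """Fraction of sampled journals accepting the flawed paper, given
    that share_predatory of the sample comes from known-bad publishers."""
    accepted = 0
    for _ in range(n_journals):
        predatory = random.random() < share_predatory
        p_accept = p_accept_predatory if predatory else p_accept_ordinary
        accepted += random.random() < p_accept
    return accepted / n_journals

# Only the sampling changes; the "result" changes with it.
print("Broad sample:                  ", acceptance_rate(300, share_predatory=0.1))
print("Sample enriched with bad actors:", acceptance_rate(300, share_predatory=0.6))
```

With made-up numbers like these, the "acceptance rate" roughly doubles just by changing who gets sampled, which is the whole point: the result was baked in before a single paper was sent.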

Speculating about motives is always error-prone, of course, but it's hard not to think that Science's goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it's misfired badly. It's Science's credibility that's been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon's work: mistakes in recording whether given journals were listed in DOAJ or on Beall's list. As a result, the sample may be more, or less, biased than Bohannon reported.

For a palaeontology blog, we don’t talk a lot about geology. Time to fix that, courtesy of my middle son Matthew, currently 13 years old, who made this helpful guide to the rock cycle as Geology homework.

[Images: rocky1, rocky2, and rocky3 – the three pages of Matthew's guide to the rock cycle.]

As the conference season heaves into view again, I thought it was worth gathering all four parts of the old Tutorial 16 (“giving good talks”) into one place, so it’s easy to link to. So here they are:

  • Part 1: Planning: finding a narrative
    • Make us care about your project.
    • Tell us a story.
    • You won’t be able to talk about everything you’ve done this year.
    • Omit much that is relevant.
    • Pick a single narrative.
    • Ruthlessly prune.
    • [You want to end up with] a structure that begins at the beginning, tells a single coherent story from beginning to end, and then stops.
  • Part 2: The slides: presenting your information to be understood
    • Make yourself understood.
    • The slides for a conference talk are science, not art.
    • Don’t “frame” your content.
    • Whatever you’re showing us, let us see it.
    • Use as little text as possible.
    • Use big fonts.
    • Use high contrast between the text and background.
    • No vertical writing.
    • Avoid elaborate fonts.
    • Pick a single font.
    • Stick to standard fonts.
    • You might want to avoid Arial.
    • Do not use Comic Sans MS.
    • Use a consistent colour palette.
    • Avoid putting important information at the bottom.
    • Avoid hatching.
    • Skip the fancy slide transitions.
    • Draw highlighting marks on your slides.
    • Show us specimens!
  • Part 3: Rehearsal: honing the story and how it’s told
    • Fit into the time.
    • Become fluent in delivery.
    • Maintain flow and momentum.
    • Decide what to cut.
    • Get feedback.
  • Part 4: Delivery: telling the story
    • Speak up!
    • Slow down!
    • Don’t panic!

Enjoy!

It turns out that G. K. Chesterton conveniently summarised all of my advice on slide preparation more than a century ago:

This is the sort of [slides] we like
(For you and I are very small),
With pictures stuck in anyhow,
And hardly any words at all.

You will not understand a word
Of all the words, including mine;
Never you trouble; you can see,
And all directness is divine.

Stand up and keep your childishness:
Read all the pedants’ screeds and strictures;
But don’t believe in anything
That can't be told in coloured pictures.

(Inscribed in the front of a child’s picture book, around 1906.)

It’s five to ten on Saturday night. Matt and I are in New York City. We could be at the all-you-can-eat sushi buffet a couple of blocks down from our hotel, or watching a film, or doing all sorts of cool stuff.

Instead, we’re in our hotel room. Matt is reformatting the bibliography of our neck-elongation manuscript, and I am tweaking the format of the citations.

Just sayin’.
