I recently discovered the blog Slime Mold Time Mold, which is largely about the science of obesity — a matter of more than academic interest to me and, if I may say so, to Matt.

I discovered SMTM through its fascinating discussions of scurvy and citrus-fruit taxonomy. But what’s really been absorbing me recently is a series of twenty long, detailed posts under the banner “A Chemical Hunger”, in which the author contends that the principal cause of the modern obesity epidemic is chemically-induced changes to the “lipostat” that tells our bodies what level of mass to maintain.

I highly recommend that you read the first post in this series, “Mysteries”, and see what you think. If you want to read on after that, fine; but even if you stop there, you’ll still have read something fascinating, counter-intuitive, well referenced and (I think) pretty convincing.

Anyway. The post that fascinates me right now is one of the digressions: “Interlude B: The Nutrient Sludge Diet”. In this post, the author tells us about “a 1965 study in which volunteers received all their food from a ‘feeding machine’ that pumped a ‘liquid formula diet’ through a ‘dispensing syringe-type pump which delivers a predetermined volume of formula through the mouthpiece’”, but they were at liberty to choose how many hits of this neutral-tasting sludge they took.

This study had an absolutely sensational outcome: among the participants with healthy body weight, the amount of nutrient sludge that they chose to feed themselves was almost exactly equal in caloric content to their diets before the experiment. But the grossly obese participants (weighing about 400 lb = 180 kg) chose to feed themselves a tiny proportion of their usual intake — about one tenth — and lost an astonishing amount of weight. All without feeling hunger.

Please do read the Slime Mold Time Mold write-up for the details. But I will let you in right now on the study’s very, very significant flaw. The sample size was two. That is, two obese participants, plus a control group of two healthy-weight individuals. And clearly whatever conclusion we can draw from a study of that size is merely anecdotal, having no statistical power worth mentioning.
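To put a rough number on “no statistical power worth mentioning”, here is a minimal sketch of my own (not from the 1965 study or the SMTM post), framing the comparison as a two-sample t-test and leaning on statsmodels’ standard power calculator; the effect sizes are illustrative assumptions.

```python
# Hypothetical illustration: how much power does a two-per-group comparison
# have, even under wildly optimistic assumptions? (Not from the original study.)
from statsmodels.stats.power import TTestIndPower

ttest_power = TTestIndPower()

# Even granting an implausibly enormous effect (Cohen's d = 2.0), two obese
# participants versus two controls would detect it well under half the time.
power_at_n2 = ttest_power.solve_power(effect_size=2.0, nobs1=2, alpha=0.05)
print(f"Power with n = 2 per group at d = 2.0: {power_at_n2:.2f}")

# For comparison: the per-group sample size needed for the conventional 80%
# power at a merely "large" effect (d = 0.8).
n_for_80 = ttest_power.solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"n per group for 80% power at d = 0.8: {n_for_80:.0f}")
```

However you slice it, two-and-two participants can only ever give you an anecdote, which is exactly why the lack of a larger replication is so maddening.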

And now we come to the truly astonishing part of this. It seems no-one has tried to replicate this study with a decent-sized sample. The blog says:

If this works, why hasn’t someone replicated it by now? It would be pretty easy to run a RCT where you fed more than five obese people nutrient sludge ad libitum for a couple weeks, so this means either it doesn’t work as described, or it does work and for some reason no one has tried it. Given how huge the rewards for this finding would be, we’re going to go with the “it doesn’t work” explanation.

In a comment, I asked:

OK, I’ll bite. Why hasn’t anyone tried to replicate the astounding and potentially valuable findings of these studies? It beggars belief that it’s not been tried, and multiple times. Do you think it has been tried, but the results weren’t published because they were unimpressive? That would be an appalling waste.

The blog author replied:

Our guess is that it simply hasn’t been tried! Academia likes to pretend that research is one-and-done, and rarely checks things once they’re in the literature. We agree, someone should try to replicate!

I’m sort of at a loss for words here. How can it possibly be that, 58 years after a pilot study that potentially offers a silver bullet to the problem of obesity, no-one has bothered to check whether it works? I mean, the initial study is so old that Revolver hadn’t been released. Yet it seems to have just lain there, unloved, as the Beatles moved on through Sergeant Pepper, the White Album, Abbey Road et al., broke up, pursued their various solo projects, died (50% of the sample) and watched popular music devolve into whatever the heck it is now.

Why aren’t obesity researchers all over this?

We’ve noted many times over the years how inconsistent pneumatic features are in sauropod vertebrae. Fossae and foramina vary between individuals of the same species, and along the spinal column, and even between the sides of individual vertebrae. Here’s an example that we touched on in Wedel and Taylor (2013), but which is seen in all its glory here:

Taylor and Wedel (2021: Figure 5). Giraffatitan brancai tail MB.R.5000, part of the mounted skeleton at the Museum für Naturkunde Berlin. Caudal vertebrae 24–26 in left lateral view. While caudal 26 has no pneumatic features, caudal 25 has two distinct pneumatic fossae, likely excavated around two distinct vascular foramina carrying an artery and a vein. Caudal 24 is more shallowly excavated than 25, but may also exhibit two separate fossae.

But bone is usually the least variable material in the vertebrate body. Muscles vary more, nerves more again, and blood vessels most of all. So why are the vertebrae of sauropods so much more variable than other bones?

Our new paper, published today (Taylor and Wedel 2021), proposes an answer! Please read it for the details, but here’s the summary:

  • Early in ontogeny, the blood supply to vertebrae comes from arteries that initially served the spinal cord, penetrating the bone of the neural canal.
  • Later in ontogeny, additional arteries penetrate the centra, leaving vascular foramina (small holes carrying blood vessels).
  • This hand-off does not always run to completion, due to the variability of blood vessels.
  • In extant birds, when pneumatic diverticula enter the bone they do so via vascular foramina, alongside blood vessels.
  • The same was probably true in sauropods.
  • So in vertebrae that got all their blood supply from vascular foramina in the neural canal, diverticula were unable to enter the centra from the outside.
  • So those centra were never pneumatized from the outside, and no externally visible pneumatic cavities were formed.

Somehow that pretty straightforward argument ended up running to eleven pages. I guess that’s what you get when you reference your thoughts thoroughly, illustrate them in detail, and discuss the implications. But the heart of the paper is that little bullet-list.

Taylor and Wedel (2021: Figure 6). Domestic duck Anas platyrhynchos, dorsal vertebrae 2–7 in left lateral view. Note that the two anteriormost vertebrae (D2 and D3) each have a shallow pneumatic fossa penetrated by numerous small foramina.

(What is the relevance of these duck dorsals? You will need to read the discussion in the paper to find out!)

Our choice of publication venue

The world moves fast. It’s strange to think that only eleven years ago my Brachiosaurus revision (Taylor 2009) was in the Journal of Vertebrate Paleontology, a journal that now feels very retro. Since then, Matt and I have both published several times in PeerJ, which we love. More recently, we’ve been posting preprints of our papers — and indeed I have three papers stalled in peer-review revisions that are all available as preprints (two Taylor and Wedels and one sole-authored). But this time we’re pushing on even further into the Shiny Digital Future.

We’ve published at Qeios. (It’s pronounced “chaos”, but the site doesn’t tell you that; I discovered it on Twitter.) If you’ve not heard of it — I was only very vaguely aware of it myself until this evening — it runs on the same model as the better known F1000 Research, with this very important difference: it’s free. Also, it looks rather slicker.

That model is: publish first, then filter. This is the opposite of the traditional scholarly publishing flow, where you filter first — by peer reviewers erecting a series of obstacles to getting your work out — and only after negotiating that course do you get to see your work published. At Qeios, you go right ahead and publish: it’s available right off the bat, but clearly marked as awaiting peer review.

And then it undergoes review. Who reviews it? Anyone! Ideally, of course, people with some expertise in the relevant fields. We can then post any number of revised versions in response to the reviews — each revision having its own DOI and being fixed and permanent.

How will this work out? We don’t know. It is, in part, an experiment. What will make it work — what will impute credibility to our paper — is good, solid reviews. So if you have any relevant expertise, we do invite you to get over there and write a review.

And finally …

Matt noted that I first sent him the link to the Qeios site at 7:44 pm my time. I think that was the first time he’d heard of it. He and I had plenty of back and forth on where to publish this paper before I pushed on and did it at Qeios. And I tweeted that our paper was available for review at 8:44 — one hour exactly after Matt learned that the venue existed. Now here we are at 12:04 my time, three hours and 20 minutes later, and it’s already been viewed 126 times and downloaded 60 times. I think that’s pretty awesome.

References

  • Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806. [PDF]
  • Taylor, Michael P., and Mathew J. Wedel. 2021. Why is vertebral pneumaticity in sauropod dinosaurs so variable? Qeios 1G6J3Q. doi: 10.32388/1G6J3Q [PDF]
  • Wedel, Mathew J., and Michael P. Taylor. 2013. Caudal pneumaticity and pneumatic hiatuses in the sauropod dinosaurs Giraffatitan and Apatosaurus. PLOS ONE 8(10):e78213. 14 pages. doi: 10.1371/journal.pone.0078213 [PDF]

Down in flames

August 25, 2018

I first encountered Larry Niven’s story/essay “Down in Flames” in the collection N-Space in high school. This was after I’d read Ringworld and most of Niven’s Known Space stories, so by the time I got to “Down in Flames” I had the context to get it. (You can read the whole thing for free here.)

Here’s the idea, from near the start:

On January 14, 1968, Norman Spinrad and I were at a party thrown by Tom & Terry Pinckard. We were filling coffee cups when Spinny started this whole thing.

“You ought to drop the known space series,” he said. “You’ll get stale.” (Quotes are not necessarily dead accurate.) I explained that I was writing stories outside the “known space” history, and that I would give up the series as soon as I ran out of things to say within its framework. Which would be soon.

“Then why don’t you write a novel that tears it to shreds? Don’t just abandon known space. Destroy it!”

“But how?” (I never asked why. Norman and I think alike in some ways.)

The rest of the piece is just working out the details.

“Down in Flames” brain-wormed me. Other than Ray Bradbury’s “A Sound of Thunder”, I doubt if there is another short story I’ve read as many times. Mike once described the act of building something complex and beautiful and then destroying it as “magnificently profligate”, and that’s the exact quality of “Down in Flames” that appeals to me.

I also think it is a terrific* exercise for everyone who is a scientist, or who aspires to be one.

* In both the modern sense of “wonderful” and the archaic sense of “causing terror”.

Seriously, try it. Grab a piece of paper (or open a new doc, or whatever) and write down the ideas you’ve had that you hold most dear. And then imagine what it would take for all of them to be wrong. (When teams and organizations do this for their own futures, it’s called a pre-mortem, and there’s a whole managerially oriented literature on it. I’d read “Down in Flames” instead.)


Here are some questions to help you along:

  • Which of your chains of reasoning admit more than one end-point? If none of them might lead other places, then either you are the most amazing genius of all time (even Newton and Einstein made mistakes), or you are way behind the cutting edge, and your apparent flawlessness comes from working on things that are already settled.
  • If there is a line of evidence that could potentially falsify your pet hypothesis, have you checked it? Have you drawn any attention to it? Or have you gracefully elided it from your discussions in hopes that no-one will notice, at least until after you’re dead?
  • If there’s no line of evidence that could falsify your pet hypothesis, are you actually doing science?
  • Which of your own hypotheses do you have an emotional investment in?
  • Are there findings from a rival research team (real or imagined) that you would not be happy to see published, if they were accurate?
  • Which hypotheses that you disagree with would you be most dismayed to see proven correct?

[And yes, Karl, I know that according to some pedants hypotheses are never ‘proven’. It’s a theoretical exercise already, so just pretend they can be!]

I’ll close with one of my favorite quotes, originally published in a couple of tweets by Angus Johnson in May of 2017 (also archived here):

If skepticism means anything it means skepticism about the things you WANT to be true. It’s easy to be a skeptic about others’ views. Embracing a set of claims just because it happens to fit your priors doesn’t make you a skeptic. It makes you a rube, a mark, a schnook.

So, don’t be that rube. Burn down your house of ideas – or at least, mentally sift through the rubble and ashes and imagine how it might have burned down. And then be honest about that, minimally with yourself, and ideally with the world.

If you’re a true intellectual badass, blog the results. I will. It’s not fair to give you all homework – painful homework – and not take the medicine myself, so I’m going to do a “Down in Flames” on my whole oeuvre in a future post. Stay tuned!

I’ve been on Twitter since April 2011 — nearly six years. A few weeks ago, for the first time, something I tweeted broke the thousand-retweets barrier. And I am really unhappy about it. For two reasons.

First, it’s not my own content — it’s a screen-shot of Table 1 from Edwards and Roy (2017):

[Screenshot: Table 1 of Edwards and Roy (2017).]

And second, it’s so darned depressing.

The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.

In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people:

  • Goodhart’s Law is most succinct: “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s Law is the most explicit: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse.

As I say, this is well known. There’s even a term for it in social theory: reflexivity. And yet we persist in doing idiot things that can only possibly have this result:

  • Assessing school-teachers on the improvement their kids show in tests between the start and end of the year (which obviously results in their doing all they can to depress the start-of-year tests).
  • Assessing researchers by the number of their papers (which can only result in work being sliced into minimal publishable units).
  • Assessing them — heaven help us — on the impact factors of the journals their papers appear in (which feeds the brand-name fetish that is crippling scholarly communication).
  • Assessing researchers on whether their experiments are “successful”, i.e. whether they find statistically significant results (which inevitably results in p-hacking and HARKing).
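To make that last bullet concrete, here is a toy simulation of my own (nothing from Edwards and Roy, and the parameters are arbitrary): a researcher measures 20 unrelated outcomes per study, there is no real effect anywhere, and a study counts as a “success” if any single outcome crosses p < 0.05.

```python
# Toy illustration of success-chasing under the null: 20 unrelated outcomes per
# study, no true effect at all, and a study "succeeds" if any outcome gives
# p < 0.05. Parameters are arbitrary; this is a sketch, not a claim about any
# particular field.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_per_group = 5_000, 20, 30

successes = 0
for _ in range(n_studies):
    control = rng.normal(size=(n_outcomes, n_per_group))
    treatment = rng.normal(size=(n_outcomes, n_per_group))  # same distribution: null is true
    pvalues = stats.ttest_ind(control, treatment, axis=1).pvalue
    if (pvalues < 0.05).any():
        successes += 1

print(f"Studies reporting at least one 'significant' result: {successes / n_studies:.0%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, even though there is nothing to find.
```

Nobody in that simulation is lying; they are just reporting whichever comparison happened to “work”, which is all the incentive asks of them.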

What’s the solution, then?

I’ve been reading the excellent blog of economist Tim Harford for a while. That arose from reading his even more excellent book The Undercover Economist (Harford 2007), which gave me a crash-course in the basics of how economies work, how markets help, how they can go wrong, and much more. I really can’t say enough good things about this book: it’s one of those that I feel everyone should read, because the issues are so important and pervasive, and Harford’s explanations are so clear.

In a recent post, Why central bankers shouldn’t have skin in the game, he makes this point:

The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.

I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?

[Read on to Why do we manage academia so badly?]

Bonus

Here is a nicely formatted full-page version of the Edwards and Roy table, for you to print out and stick on all the walls of your university. My thanks to David Roberts for preparing it.

Many thanks to everyone who played pin-the-skull-on-the-carnivore. The answers are down at the bottom of this post, so if you’ve just arrived here and want to take the challenge, go here before you scroll down.

To fill up some space, let me point out how crazy variable the skulls of black bears, Ursus americanus, are.

My bear skull - left lateral reversed

Here’s the one I helped dig up, missing the occipital region. Note the double inflection in the dorsal outline that separates the forehead from both the snout and top of the head, and the way the nasal bones stick out at a very different angle from the maxilla.

Page Museum black bear skull

Here’s the skull of a black bear from the La Brea tar pits, in the Page Museum in L.A. I don’t know if this one was female or juvenile or what, but the dorsal margin of the skull is one mostly-smooth curve from occiput almost to incisors, with the nasals scarcely deviating at all. Lest you think these differences were caused by evolutionary change rather than intraspecific variation, similar “roundhead” bear skulls from modern times are here and here and near the bottom of this page.

It’s this variability that first got me thinking about doing the Carnivore Skull Challenge. I saw a couple of photos of skulls of wolverines, and except for having carnassial cheek teeth instead of flatter premolars and molars, the wolverine skulls look like they could fit right into the span of black bear skull variability (in shape; obviously they’re not nearly as big). Then I saw a hyena skull and thought that it wasn’t that far off from a wolverine either. A little more searching for plausible distractors and I was all set.

Here are the answers, by the way:

Carnivore skull challenge - answers

It’s kind of ironic, then, that the first two people to venture identifications picked out the black bear right away. In the very first comment, Dean got it almost all right except for swapping the seal and the fossa. Dean was also the first to get all of the skulls correctly identified, albeit on his second pass. Markus Bühler (of Cthulhu-sculpting fame) was the first to get them all the first time. Tom Nutter, our own Darren Naish, and microecos Neil also aced the test, although in light of the Page Museum bear skull shown above, I was amused to see Darren’s “D: Bear. Because forehead.” I guess it’s one of those presence-of-forehead-means-bear, absence-of-forehead-does-not-rule-out-bear things that logicians are always going on about.

I was really happy to see people getting the wolverine and hyena mixed up, because they really do look strikingly similar to me. It’s almost like hyena + bear = wolverine.

Brian Engh asked on Facebook when I was going to do one for sauropods. Patience, good sir! It’s on my to-do list.

Finally, speaking of bear skulls, you can get a sweet tiny bronze one with a hinged jaw as part of this already successful Kickstarter, or from Fireandbone.com once the Kickstarter ends.

Something very different, and very unexpected, tomorrow.

Carnivore skull challenge

In this image I have assembled photos of skulls (or casts of skulls) of six extant carnivores. I exclusively used photos from the Skulls Unlimited website because they had all the taxa I wanted, lit about the same and photographed from similar angles. The omission of scale indicators is deliberate.

Your mission, should you choose to accept it, is to match these skulls with the animals they came from. Here are their currently understood hierarchical relationships, scientific names, and common names (aside: I know this is ugly; is there a way to make nested tables in WordPress?).

  • Carnivora
    • Herpestoidea
      • Eupleridae
        • Fossa, Cryptoprocta ferox
      • Hyaenidae
        • Brown hyena, Hyaena brunnea
    • Arctoidea
      • Ursoidea
        • American black bear, Ursus americanus
      • Musteloidea
        • European badger, Meles meles
        • Wolverine, Gulo gulo
      • Pinnipedia
        • Mediterranean monk seal, Monachus monachus

If you accept the challenge, leave your guesses as comments below, but only if you’ve played fair – no checking websites, references, or your own skull collection! Don’t worry about being wrong; I freely admit that I would have flunked this big time if anyone else had inflicted it on me. I decided to set up this challenge after I noticed the striking similarity between two of these critters in particular; I’ll tell you which two when I post the reveal in a day or two.