
But not “funny ha-ha”. More like, “funny how that neck is clearly impossible.” I mean, really.

This is another shot from the Museum of Osteology in Oklahoma City. A few hundred more posts like this and I’ll be done.

For more flamingo-related weirdness, check out Casey Holliday’s work (with Ryan Ridgely, Amy Balanoff, and Larry Witmer) on the wacky blood vessels in flamingo heads. Unfortunately, Holliday et al. found no evidence of the antigravity generators that are obviously present in flamingoes somewhere. So there’s more work to be done here.

Kinda makes me sad, to ponder all of the sweet soft-tissue adaptations that extinct organisms must have had, that we will probably never know (enough) about. At least we have freaks like this around to remind us.


The LSE Impact blog has a new post, Berlin 11 satellite conference encourages students and early stage researchers to influence shift towards Open Access. Thinking about this, Jon Tennant (@Protohedgehog) just tweeted this important idea:

Would be nice to see a breakdown of OA vs non-OA publications based on career-stage of first author. Might be a wake-up call.

It would be very useful. It makes me think of Zen Faulkes’s important 2011 blog-post, What have you done lately that needed tenure?. We should be seeing the big push towards open access coming from senior academics who are established in their roles and don’t need to scrabble around for jobs the way early-career researchers do. Yet my impression is that, in fact, early-career researchers are doing a lot of the pro-open-access heavy lifting.

Is that impression true?

We should find out.

Here’s one possible experimental design: take a random sample of 100 Ph.D. students, 100 post-docs, 100 early-career researchers in tenure-track jobs and 100 tenured researchers. For each of them, analyse their last ten years of publications and determine what proportion are paywalled, what proportion are free to read (e.g. on arXiv or in an all-rights-reserved institutional repository), and what proportion are true (BOAI-compliant) open access.

An alternative approach would be to randomly sample 1000 open-access papers (from PLOS and BMC journals, for example) and 1000 paywalled papers (from Elsevier and Springer, say), and find the career stage of their authors. I’m not sure which approach would be better.
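
For what it’s worth, here is a minimal sketch (in Python, with invented field names and toy data, not a finished pipeline) of the cross-tabulation that either sampling approach boils down to: classify each sampled paper by the career stage of its first author and by its access category, then report the proportions.

```python
from collections import Counter, defaultdict

# Hypothetical records, one per sampled paper: however the sample is drawn,
# each paper ends up tagged with (career stage of first author, access category).
records = [
    ("PhD student", "BOAI-open"),
    ("post-doc", "paywalled"),
    ("tenure-track", "free-to-read"),
    ("tenured", "paywalled"),
    # ... one entry per sampled paper
]

# Tally access categories within each career stage.
counts = defaultdict(Counter)
for stage, access in records:
    counts[stage][access] += 1

# Report the proportions per career stage.
for stage, tally in counts.items():
    total = sum(tally.values())
    print(stage)
    for access, n in tally.items():
        print(f"  {access}: {n}/{total} = {n / total:.0%}")
```

The tabulation itself is trivial; the real work in either design is gathering and classifying the publication records in the first place.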

Who is going to do this?

I think it would be a nice, tractable first project for someone who wants to get into academic research but hasn’t previously published. It would be hugely useful, and I’m guessing widely cited. Does anyone fancy it?

Update

Georg Walther has started a hackpad about this nascent project. Since Jon “Protohedgehog” Tennant has now tweeted about it, I assume it’s OK to publicise. If you’re interested, feel free to leap in!

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.
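
To see how much the sample composition alone drives the headline number, here is a toy simulation in Python. Every figure in it is invented for illustration; nothing comes from Bohannon’s actual data. It simply shows that a sample padded with journals already known to accept anything will yield a high acceptance rate regardless of how the better open-access journals behave.

```python
import random

random.seed(1)

def acceptance_rate(n_reputable, n_predatory, p_reputable=0.10, p_predatory=0.90):
    """Send one fake paper to each journal in the sample; return the fraction accepted.

    The per-journal acceptance probabilities are made up for this sketch.
    """
    accepts = sum(random.random() < p_reputable for _ in range(n_reputable))
    accepts += sum(random.random() < p_predatory for _ in range(n_predatory))
    return accepts / (n_reputable + n_predatory)

# A sample drawn only from vetted journals vs. one padded with known-bad publishers:
print(acceptance_rate(n_reputable=300, n_predatory=0))    # low acceptance rate
print(acceptance_rate(n_reputable=150, n_predatory=150))  # roughly half accepted
```

The headline acceptance rate, in other words, tells you mostly about how the sample was assembled, not about open-access publishing as a whole.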

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science‘s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science‘s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording whether given journals were listed in DOAJ or on Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.


For a palaeontology blog, we don’t talk a lot about geology. Time to fix that, courtesy of my middle son Matthew, currently 13 years old, who made this helpful guide to the rock cycle as Geology homework.

[Matthew’s guide to the rock cycle, in three images: rocky1, rocky2 and rocky3.]

An extraordinary study has come to light today, showing just how shoddy peer-review standards are at some journals.

Evidently fascinated by Science‘s eagerness to publish the fatally flawed Arsenic Life paper, John Bohannon conceived the idea of constructing a study so incredibly flawed that it didn’t even include a control. His plan was to see whether he could get it past the notoriously lax Science peer-review provided it appealed strongly enough to that journal’s desire for “impact” (defined as the ability to generate headlines) and pandered to its preconceptions (that its own publication model is the best one).

So Bohannon carried out the most flawed study he could imagine: submitting fake papers to open-access journals selected in part from Jeffrey Beall’s list of predatory publishers, sending none of his fake papers to subscription journals, noting that many of the journals accepted the papers, and drawing the flagrantly unsupported conclusion that open-access publishing is flawed.

Incredibly, Science not only published this study, but made it the lead story of today’s issue.

It’s hard to know where Science can go from here. Having fallen for Bohannon’s sting, its credibility is shot to pieces. We can only assume that the AAAS will now be added to Beall’s list of predatory publishers.

Rolling updates

Here are some other responses to the Science story: