Anti-tutorial: how to design and execute a really bad study

October 7, 2013

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.
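To see how much work those three sampling choices do, here is a toy simulation (every journal count in it is invented purely for illustration, not taken from Bohannon’s data or from the real lists):

```python
# A toy model of the sample-construction pipeline described above.
# Every number here is invented, purely for illustration.

def proportion_predatory(journals):
    """Fraction of a journal list that is predatory."""
    return sum(j == "predatory" for j in journals) / len(journals)

# Hypothetical pools, each journal labelled "sound" or "predatory":
oaspa_members  = ["sound"] * 95 + ["predatory"] * 5   # vetted whitelist: skipped
doaj_catalogue = ["sound"] * 70 + ["predatory"] * 30  # broad catalogue: used
bealls_list    = ["predatory"] * 50                   # known-bad: added

# Step 1: start from the broad catalogue, not the vetted whitelist.
sample = list(doaj_catalogue)

# Step 2: drop journals that charge no APC. Assume (again, invented) that
# most fee-free journals are the mission-driven, sound ones: 30 sound titles go.
sample = ["sound"] * 40 + ["predatory"] * 30

# Step 3: top the sample up with Beall's-list publishers.
sample += bealls_list

print(f"whitelist:         {proportion_predatory(oaspa_members):.0%} predatory")   # 5%
print(f"catalogue:         {proportion_predatory(doaj_catalogue):.0%} predatory")  # 30%
print(f"engineered sample: {proportion_predatory(sample):.0%} predatory")          # 67%
```

Even before a single fake paper goes out, the result is largely baked in: the measured “acceptance rate of open-access journals” is mostly a measurement of how the sample was built.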

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.
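The arithmetic of that last step is worth spelling out. A minimal sketch, with made-up numbers:

```python
# Made-up numbers, purely to illustrate the effect of discarding rejections.
accepted = 150  # journals that accepted the fake paper
rejected = 100  # journals that rejected it

honest_rate = accepted / (accepted + rejected)
# 150 / 250 = 60%

omitted = 40    # inconvenient rejections quietly left out of the results
spun_rate = accepted / (accepted + rejected - omitted)
# 150 / 210 = 71%

print(f"honest acceptance rate:        {honest_rate:.0%}")  # 60%
print(f"reported rate after omissions: {spun_rate:.0%}")    # 71%
```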

Now that you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, who said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get it through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in recording whether given journals were listed in DOAJ or on Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.

9 Responses to “Anti-tutorial: how to design and execute a really bad study”

  1. Kenneth Carpenter Says:

    Mike,
    I have long been bothered by the “editorials” and “investigative reporting” that seem to take up more and more of the pages of Science and Nature. Many of these “articles” would never pass peer review, yet few readers question what they read because these articles appear in respectable journals – so they must be true and unbiased! (sure, when sauropods sprout wings and fly).


  2. […] Me again, this time with the gloves off: Anti-tutorial: how to design and execute a really bad study […]


  3. Yes, it seems he intends to target OA journals.
    Yes, lots of them screwed up.
    Yes, PLOS ONE did it right :)


  4. […] that we all know that Bohannon’s Science “sting” was embarrassing pseudo-science, it seems well […]


  5. […] Bohannon’s is a study that starts from its conclusions in order to confirm them, that to this end chooses the sample that suits it, and does not even include a minimal statistical analysis of the data. It is interesting (although […]


  6. […] I’m a sucker for good satire. In a recent post I referenced Dorothea Salo’s delightfully satirical article, “How to Scuttle a Scholarly Communication Initiative” where she lays out a detailed agenda for dissuading academic libraries from effective participation in scholarly communication activities on their campuses. This week, while trying to find the best hook for posting about the ‘sting operation’ conducted on a selection of open access journals recently reported in the journal Science, I landed on Mike Taylor’s October 7, 2013 blog post, “Anti-tutorial: how to design and execute a really bad study.” […]


  7. […] has prompted a lot of discussion and even some well-written satirical posts, e.g. Dr. Mike Taylor’s Anti-tutorial: how to design and execute a really bad study. Mike Taylor is a palaeontologist working in the Department of Earth Sciences at the University of […]


  8. […] article has created a lot of discussion and even some well-written satirical blog posts, e.g. Dr. Mike Taylor’s Anti-tutorial: how to design and execute a really bad study. Mike Taylor is a paleontologist in the Department of Earth Sciences at the University of Bristol, […]


  9. […] — the OA community generally responded by attacking Bohannon. (See, for example, here, here, and [especially] here; the DOAJ’s response, in which it accused Bohannon of racism, has […]

