As recently noted, it was my pleasure and privilege on 25 June to give a talk at the ESOF2014 conference in Copenhagen (the EuroScience Open Forum). My talk was one of four, followed by a panel discussion, in a session on the subject “Should science always be open?”.

 


I had just ten minutes to lay out the background and the problem, so it was perhaps a bit rushed. But you can judge for yourself, because the whole session was recorded on video. The image is not the greatest (it’s hard to make out the slides) and the audio is also not all it could be (the crowd noise is rather loud). But it’s not too bad, and I’ve embedded it below. (I hope the conference organisers will eventually put out a better version, cleaned up by video professionals.)

Subbiah Arunachalam (from Arun, Chennai, India) asked me whether the full text of the talk was available — the echoey audio is difficult for non-native English speakers. It wasn’t, but I’ve since typed out a transcript of what I said (editing only to remove “er”s and “um”s), and that is below. Finally, you may wish to follow the slides rather than the video: if so, they’re available in PowerPoint format and as a PDF.

Enjoy!

It’s very gracious of you all to hold this conference in English; I deeply appreciate it.

“Should science always be open?” is our question, and I’d like to open with one of the greatest scientists there’s ever been, Isaac Newton, to whom humility didn’t come naturally. But he did manage to say this brilliantly humble thing: “If I have seen further, it’s by standing on the shoulders of giants.”

And the reason I love this quote is not just because it’s insightful in itself, but because he stole it from something John of Salisbury said right back in 1159. “Bernard of Chartres used to say that we were like dwarfs seated on the shoulders of giants. If we see more and further than they, it is not due to our own clear eyes or tall bodies, but because we are raised on high and upborne by their gigantic bigness.”

Well, so Newton — I say he stole this quote, but of course he did more than that: he improved it. The original is long-winded, it goes around the houses. But Newton took that, and from that he made something better and more memorable. So in doing that, he was in fact standing on the shoulders of giants, and seeing further.

And this is consistently where progress comes from. It’s very rare that someone who’s locked in a room on his own thinking about something will have great insights. It’s always about free exchange of ideas. And we see this happening in lots of different fields.

Over the last ten or fifteen years, we’ve seen enormous advances in the kinds of things computers working in networks can do. And that’s come from the culture of openness in APIs and protocols, in Silicon Valley and elsewhere, where these things are designed.

Going back further and in a completely different field, the Impressionist painters of Paris lived in a community where they were constantly — not exactly working together, but certainly nicking each other’s ideas, improving each other’s techniques, feeding back into this developing sense of what could be done. Resulting in this fantastic art.

And looking back yet further, Florence in the Renaissance was a seat of all sorts of advances in the arts and the sciences. And again, because of this culture of many minds working together, and yielding insights and creativity that would not have been possible with any one of them alone.

And this is because of network effects. Metcalfe’s Law expresses this by saying that the value of a network is proportional to the square of the number of nodes in that network. So in terms of scientific research, what that means is that if you have a corpus of published research output, of papers, then the value of that doesn’t just increase with the number of papers — it goes up with the square of the number of papers. Because the value isn’t so much in the individual bits of research, but in the connections between them. That’s where great ideas come from. One researcher will read one paper from here and one from here, and see where the connection or the contradiction is; and from that comes the new idea.
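To see the quadratic growth that Metcalfe’s Law describes, here is a minimal sketch (the numbers are illustrative, not from the talk): the count of possible pairwise connections among n papers is n(n−1)/2, which grows with the square of n.

```python
# Possible pairwise connections among n nodes (papers, researchers, ...):
# n * (n - 1) / 2, which grows with the square of n -- the informal
# content of Metcalfe's Law as used here.
def pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_connections(n))  # 45, 4950, 499500 respectively
```

Ten times the papers gives roughly a hundred times the possible connections — which is the sense in which the value of the corpus outruns its size.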

So it’s very important to increase the size of the network of what’s available. And that’s why scientists particularly — but researchers in other areas as well — have a very natural tendency to share.

Now until recently, the big difficulty we’ve had with sharing has been logistical. It was just difficult to make and distribute copies of pieces of research. So this [picture of a printing press] is how we made copies, this [picture of stacks of paper] was what we stored them on, and this was how we transmitted them from one researcher to another.

And they were not the most efficient means, or at least not as efficient as what we now have available. And because of that, and because of the importance of communication and the links between research, I would argue that maybe the most important invention of the last hundred years is the Internet in general and the World Wide Web in particular. And the purpose of the Web, as it was initially articulated in the first public post that Tim Berners-Lee made in 1991 — he explained not just what the Web was but what it was for, and he said: “The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups.”

So that’s what the Web is for; and here’s why it’s important. I’m quoting here from Cameron Neylon, who’s great at this kind of thing. And again it comes down to connections, and I’m just going to read out loud from his blog: “Like all developments of new communication networks, SMS, fixed telephones, the telegraph, the railways, and writing itself, the internet doesn’t just change how well we can do things, it qualitatively changes what we can do.” And then later on in the same post: “At network scale the system ensures that resources get used in unexpected ways. At scale you can have serendipity by design, not by blind luck.”

Now that’s a paradox; it’s almost a contradiction, isn’t it? Serendipity by definition is what you get by blind luck. But the point is, when you have enough connections — enough papers floating around the same open ecosystem — all the collisions happening between them, it’s inevitable that you’re going to get interesting things coming out. And that’s what we’re aiming towards.

And of course it’s never been more important, with health crises, new diseases, the diminishing effectiveness of antibiotics, the difficulties of feeding a world of many billions of people, and the results of climate change. It’s not as though we’re short of significant problems to deal with.

So I love this Jon Foley quote. He said, “Your job” — as a researcher — “Your job is not to get tenure! Your job is to change the world”. Tenure is a means to an end, it’s not what you’re there for.

So this is the importance of publishing. Of course the word “publish” comes from the same root as the word “public”: to publish a piece of research means to make that piece of research public. And the purpose of publishing is to open research up to the world, and so open up the world itself.

And that’s why it’s so tragic when we run into this [picture of a paywalled paper]. I think we’ve all seen this at various times. You go to read a piece of research that’s valuable, that’s relevant to either the research you’re doing, or the job you’re doing in your company, or whatever it might be. And you run into this paywall. Thirty-five dollars and 95 cents to read this paper. It’s a disaster. Because what’s happened is we’ve got a whole industry whose whole reason for existence is to make things public, and which, because of accidents of history, has found itself doing the exact opposite. Now no-one goes into publishing with the intent of doing this. But this is the unfortunate outcome.

So what we end up with is a situation where we’re re-imposing on the research community barriers that were necessarily imposed by the inadequate technology of 20 or 30 years ago — barriers we’ve now transcended in technological terms, but are still struggling with for, frankly, commercial reasons.

And I don’t like to be critical, but I think we have to face the fact that there is a real problem when organisations have for many years been making extremely high profits — these [36%, 32%, 34%, 42%] are the profit margins of the “big four” academic publishers, which together hugely dominate the scholarly publishing market — and as you can see, somewhere in the range of 32% to 42% of their revenue is sheer profit. So every time your university library spends a dollar on subscriptions, around 40% of that goes straight out of the system to nowhere.

And it’s not surprising that these companies are hanging on desperately to the business model that allows them to do that.

Now the problem we have in advocating for open access is that when we stand against publishers who have an existing very profitable business model, they can complain to governments and say, “Look, we have a market that’s economically significant, it’s worth somewhere in the region of 10-15 billion US dollars a year.” And they will say to governments, “You shouldn’t do anything that might damage this.” And that sounds effective. And we struggle to argue against that because we’re talking about an opportunity cost, which is so much harder to measure.

You know, I can stand here — as I have done — and wave my hands around, and talk about innovation and opportunity, and networks and connections, but it’s very hard to quantify those in a way that’s numerically persuasive. Say they have a 15 billion dollar business; we’re talking about saving three trillion’s worth of economic value (and I pulled that number out of thin air). So I would love, if we can, when we get to the discussions, to brainstorm some way to quantify the opportunity cost of not being open. But this is what it looks like [picture of flooding due to climate change]. Economically, I don’t know what it’s worth. But in terms of the world we live in, it’s just essential.

So we’ve got to remember the mission that we’re on. We’re not just trying to save costs by going to open access publishing. We’re trying to transform what research is, and what it’s for.

So should science always be open? Of course, the name of the session should have been “Of course science should always be open”.

 

[NOTE: see the updates at the bottom. In summary, there's nothing to see here and I was mistaken in posting this in the first place.]

Elsevier’s War On Access was stepped up last year when they started contacting individual universities to prevent them from letting the world read their research. Today I got this message from a librarian at my university:

[screen-grab of the takedown notification from the library]

The irony that this was sent from the Library’s “Open Access Team” is not lost on me. Added bonus irony: this takedown notification pertains to an article about how openness combats mistrust and secrecy. Well. You’d almost think NPG wants mistrust and secrecy, wouldn’t you?

It’s sometimes been noted that by talking so much about Elsevier on this blog, we can appear to be giving other barrier-based publishers a free ride. If we give that impression, it’s not deliberate. By initiating this takedown, Nature Publishing Group has identified itself as yet another so-called academic publisher that is in fact an enemy of science.

So what next? Anyone who wants a PDF of this (completely trivial) letter can still get one very easily from my own web-site, so in that sense no damage has been done. But it does leave me wondering what the point of the Institutional Repository is. In practice it seems to be a single point of weakness allowing “publishers” to do the maximum amount of damage with a single attack.

But part of me thinks the thing to do is take the accepted manuscript and format it myself in the exact same way as Nature did, and post that. Just because I can. Because the bottom line is that typesetting is the only actual service they offered Andy, Matt and me in exchange for our right to show our work to the world, and that is a trivial service.

The other outcome is that this hardens my determination never to send anything to Nature again. Now it’s not like my research program is likely to turn up tabloid-friendly results anyway, so this is a bit of a null resolution. But you never know: if I happen to stumble across sauropod feather impressions in an overlooked Wealden fossil, then that discovery is going straight to PeerJ, PLOS, BMC, F1000 Research, Frontiers or another open-access publisher, just like all my other work.

And that’s sheer self-interest at work there, just as much as it’s a statement. I will not let my best work be hidden from the world. Why would anyone?

Let’s finish with another outing for this meme-ready image.

Publishers ... You're doing it wrong

Update (four hours later)

David Mainwaring (on Twitter) and James Bisset (in the comment below) both pointed out that I’ve not seen an actual takedown request from NPG — just the takedown notification from my own library. I assumed that the library were doing this in response to hassle from NPG, but of course it’s possible that my own library’s Open Access Team is unilaterally trying to prevent access to the work of its university’s researchers.

I’ve emailed Lyn Duffy to ask for clarification. In the mean time, NPG’s Grace Baynes has tweeted:

So it looks like this may be even more bizarre than I’d realised.

Further bulletins as events warrant.

Update 2 (two more hours later)

OK, consensus is that I read this completely wrong. Matt’s comment below says it best:

I have always understood institutional repositories to be repositories for author’s accepted manuscripts, not for publisher’s formatted versions of record. By that understanding, if you upload the latter, you’re breaking the rules, and basically pitting the repository against the publisher.

Which is, at least, not a nice thing to do to the repository.

So the conclusion is: I was wrong, and there’s nothing to see here apart from me being embarrassed. That’s why I’ve struck through much of the text above. (We try not to actually delete things from this blog, to avoid giving a false history.)

My apologies to Lyn Duffy, who was just doing her job.

Update 3 (another hour later)

This just in from Lyn Duffy, confirming that, as David and James guessed, NPG did not send a takedown notice:

Dear Mike,

This PDF was removed as part of the standard validation work of the Open Access team and was not prompted by communication from Nature Publishing. We validate every full-text document that is uploaded to Pure to make sure that the publisher permits posting of that version in an institutional repository. Only after validation are full-text documents made publicly available.

In this case we were following the regulations as stated in the Nature Publishing policy about confidentiality and pre-publicity. The policy says, ‘The published version — copyedited and in Nature journal format — may not be posted on any website or preprint server’ (http://www.nature.com/authors/policies/confidentiality.html). In the information for authors about ‘Other material published in Nature’ it says, ‘All articles for all sections of Nature are considered according to our usual conditions of publication’ (http://www.nature.com/nature/authors/gta/others.html#correspondence). We took this to mean that material such as correspondence have the same posting restrictions as other material published by Nature Publishing.

If we have made the wrong decision in this case and you do have permission from Nature Publishing to make the PDF of your correspondence publicly available via an institutional repository, we can upload the PDF to the record.

Kind regards,
Open Access Team

Appendix

Here’s the text of the original notification email so search-engines can pick it up. (If you read the screen-grab above, you can ignore this.)

University of Bristol — Pure

Lyn Duffy has added a comment

Sharing: public databases combat mistrust and secrecy
Farke, A. A., Taylor, M. P. & Wedel, M. J. 22 Oct 2009 In : Nature. 461, 7267, p. 1053

Research output: Contribution to journal › Article

Lyn Duffy has added a comment 7/05/14 10:23

Dear Michael, Apologies for the delay in checking your record. It appears that the document you have uploaded alongside this record is the publishers own version/PDF and making this version openly accessible in Pure is prohibited by the publisher, as a result the document has been removed from the record. In this particular instance the publisher would allow you to make accessible the postprint version of the paper, i.e., the article in the form accepted for publication in the journal following the process of peer review. Please upload an acceptable version of the paper if you have one. If you have any questions about this please get back to us, or send an email directly to open-access@bristol.ac.uk Kind regards, Lyn Duffy Library Open Access Team.

In discussion of Samuel Gershman’s rather good piece The Exploitative Economics Of Academic Publishing, I got into this discussion on Twitter with David Mainwaring (who is usually one of the more interesting legacy-publisher representatives on these issues) and Daniel Allington (who I don’t know at all).

I’ll need to give a bit of background before I reach the key part of that discussion, so here goes. I said that one of David’s comments was a patronising evasion, and that I expected better of him, and also that it was an explicit refusal to engage. David’s response was interesting:

First, to clear up the first half, I wasn’t at all saying that David hasn’t engaged in OA, but that in this instance he’d rejected engagement — and that his previous record of engaging with the issues was why I’d said “I expect better from you” at the outset.

Now with all that he-said-she-said out of the way, here’s the point I want to make.

David’s tweet quoted above makes a very common but insidious assumption: that a “nuanced” argument is intrinsically preferable to a simple one. And we absolutely mustn’t accept that.

We see this idea again and again: open-access advocates are criticised for not being nuanced, with the implication that this equates with not being right. But the right position is not always nuanced. Recruiting Godwin to the cause of a reductio ad absurdum, we can see this by asking the question “was Hitler right to commit genocide?” If you say “no”, then I will agree with you; I won’t criticise your position for lacking nuance. In this argument, nuance is superfluous.

[Tedious but probably necessary disclaimer: no, I am not saying that paywall-encumbered publishing is morally equivalent to genocide. I am saying that the example of genocide shows that nuanced positions are not always correct, and that therefore it's wrong to assume a priori that a nuanced position regarding paywalls is correct. Maybe a nuanced position is correct: but that is something to be demonstrated, not assumed.]

So when David says “What I do hold to is that a rounded view, nuance, w/ever you call it, is important”, I have to disagree. What matters is to be right, not nuanced. Again, sometimes the right position is nuanced, but there’s no reason to assume that from the get-go.

Here’s why this is dangerous: a nuanced, balanced, rounded position sounds so grown up. And by contrast, a straightforward, black-and-white one sounds so adolescent. You know, a straightforward, black-and-white position like “genocide is bad”. The idea of nuance plays on our desire to be respected. It sounds so flattering.

We mustn’t fall for this. Our job is to figure out what’s true, not what sounds grown-up.

The Scholarly Kitchen is the blog of the Society of Scholarly Publishers, and as such discusses lots of issues that are of interest to us. But a while back, I gave up commenting there for two reasons. First, it seemed rare that fruitful discussions emerged, rather than mere echo-chamberism; and second, my comments would often be deliberately delayed for several hours “to let others get in first”, and sometimes discarded entirely, for reasons I found completely opaque.

But since June, when David Crotty took over as Editor-in-Chief from Kent Anderson, I’ve sensed a change in the wind: more thoughtful pieces, less head-in-the-sandism over the inevitable coming changes in scholarly publishing, and even genuinely fruitful back-and-forth in the comments. I was optimistic that the Kitchen could become a genuine hub of cross-fertilisation.

But then, this: The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero [cached copy]. Ex-editor Kent Anderson has risen from the grave to give us this attack piece on a fifteen-year-old.

I’m frankly astonished that David Crotty allowed this spiteful piece on the blog he edits. Is Kent Anderson so big that no-one can tell him “no”? Embarrassingly, he is currently president of the SSP, which maybe gives him leverage over the blog. But I’m completely baffled over how Crotty, Anderson or anyone else can think this piece will achieve anything other than to destroy the reputation of the Kitchen.

As Eva Amsen says, “I got as far as the part where he says Jack is not a “layperson” because his parents are middle class. (What?) Then closed tab.” I could do a paragraph-by-paragraph takedown of Anderson’s article, as Michael Eisen did for Jeffrey Beall’s anti-OA coming-out letter; but it really doesn’t deserve that level of attention.

So why am I even mentioning it? Because Jack Andraka doesn’t deserve to be hunted by a troll. I’m not going to be the only one finally giving up on The Scholarly Kitchen if David Crotty doesn’t do something to control his attack dog.

Seriously, David. You’re better than that. You have to be.

Reference

Anderson, Kent. 2014. The Jack Andraka Story — Uncovering the Hidden Contradictions Behind a Science Folk Hero. The Scholarly Kitchen, Society of Scholarly Publishers. URL: http://scholarlykitchen.sspnet.org/2014/01/03/the-jack-andraka-story-uncovering-the-hidden-contradictions-of-an-oa-paragon/. Accessed: 2014-01-03. (Archived by WebCite at http://www.webcitation.org/6MLiAaC9o)

I thought Elsevier was already doing all it could to alienate the authors who freely donate their work to shore up the corporation’s obscene profits. The thousands of takedown notices sent to Academia.edu represent at best a grotesque PR mis-step, an idiot manoeuvre that I thought Elsevier would immediately regret and certainly avoid repeating.

Which just goes to show that I dramatically underestimated just how much Elsevier hate it when people read the research they publish, and the lengths they’re prepared to go to when it comes to ensuring the work stays unread.

Now, they’re targeting individual universities.

The University of Calgary has just sent this notice to all staff:

The University of Calgary has been contacted by a company representing the publisher, Elsevier Reed, regarding certain Elsevier journal articles posted on our publicly accessible university web pages. We have been provided with examples of these articles and reviewed the situation. Elsevier has put the University of Calgary on notice that these publicly posted Elsevier journal articles are an infringement of Elsevier Reed’s copyright and must be taken down.

That’s it, folks. Elsevier have taken the gloves off. I’ve tried repeatedly to think the best of them, to interpret their actions in the most charitable light. I even wrote a four-part series on how they can regain the trust of researchers and librarians (part 0, part 1, part 2, part 3), under the evidently mistaken impression that that was what they wanted.

But now it’s apparent that I was far too optimistic. They have no interest in working with authors, universities, businesses or anyone else. They just want to screw every possible cent out of all parties in the short term.

Because this is, obviously, a very short-term move. Whatever feeble facade Elsevier have till now maintained of being partners in the ongoing process of research is gone forever. They’ve just tossed it away, instead desperately trying to cling onto short-term profit. In going after the University of Calgary (and I imagine other universities as well, unless this is a pilot harassment), Elsevier have declared their position as unrepentant enemies of science.

In essence, this move is an admission of defeat. It’s a classic last-throw-of-the-dice manoeuvre. It signals a recognition from Elsevier that they simply aren’t going to be able to compete with actual publishers in the 21st century. They’re burning the house down on their way out. They’re asset-stripping academia.

Elsevier are finished as a credible publisher. I can’t believe any researcher who knows what they’re doing is going to sign away their rights to Elsevier journals after this. I hope to see the editorial boards of Elsevier-encumbered journals breaking away from the dead-weight of the publisher, and finding deals that actually promote the work of those journals rather than actively hindering it.

And a reminder, folks: for those of you who want to publicly declare that you’re done with Elsevier, you can sign the Cost Of Knowledge declaration. That’s often been described as a petition, but it’s not. A petition exists to persuade someone to do something, but we’re not asking Elsevier to change. It’s evidently far, far too late for that. As a publisher, Elsevier is dead. The Cost of Knowledge is just a declaration that we’re walking away from the corpse before the stench becomes unbearable.

It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.

As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.
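The effect of that skew on an average can be seen with invented numbers (illustrative only, not data from Lozano et al.): one or two heavily cited papers drag the mean far above what a typical paper in the journal receives.

```python
from statistics import mean, median

# Invented citation counts for ten papers in one hypothetical journal:
# two big winners dominate the distribution.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 40, 120]

print(mean(citations))    # 17.3 -- the kind of average the IF is built from
print(median(citations))  # 2.0  -- what a typical paper actually gets
```

An IF-style mean tells you about the journal’s outliers, not about the paper in front of you.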

To bring this home, I plotted my own personal impact-factor/citation-count graph. I used Google Scholar’s citation counts of my articles, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):

[graph: citation counts of my papers plotted against the impact factors of their venues, with best-fit line]

I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
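For anyone who wants to reproduce this kind of plot for their own publication record, the best-fit slope is just an ordinary least-squares fit. Here is a minimal sketch with invented (impact factor, citation count) pairs standing in for the real data:

```python
# Invented (impact factor, citations) pairs, loosely in the spirit of the
# graph described above -- NOT my real publication data.
points = [(0.0, 20), (0.0, 15), (1.58, 25), (2.21, 12), (2.21, 3)]

def ols_slope(pts):
    """Ordinary least-squares slope of citations on impact factor."""
    n = len(pts)
    mean_x = sum(x for x, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pts)
    den = sum((x - mean_x) ** 2 for x, _ in pts)
    return num / den

print(ols_slope(points))  # negative for this toy data set
```

Swap in your own (IF, citations) pairs to see which way your line tilts.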

There are a few things worth unpacking on that graph.

First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.

My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community, but one with a modest impact factor of 1.58.

My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Palaeontology — unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)

In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing, given how dreadfully slowly the project has progressed.)

Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)

What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.

References

 

Walk-in access? Seriously?

November 26, 2013

Reading the Government’s comments on the recent BIS hearing on open access, I see this:

As a result of the Finch Group’s work, a programme devised by publishers, through the Publishers Licensing Society, and without funding from Government, will culminate in a Public Library Initiative. A technical pilot was successfully started on 9 September 2013

Following the link provided, I read:

The Report recommended that the existing proposal to make the majority of journals available for free to walk-in users at public libraries throughout the UK should be supported and pursued vigorously.

I’m completely, completely baffled by this. The idea that people should get in a car and drive to a special magic building in order to read papers that their own computers are perfectly capable of downloading is so utterly wrong-headed I struggle to find words for it. It’s a nineteenth-century solution to a twentieth-century problem. In 2013.

Who thought this was a good idea?

And what were they smoking at the time?

I can tell you now that the take-up for this misbegotten initiative will be zero. Because although it’s a painful waste of time to negotiate the paywalls erected by those corporations we laughably call “publishers”, this “solution” will be more of a waste of time still. (Not to mention a waste of petrol.)

I can only assume that was always the intention of the barrier-based publishers on the Finch committee that came up with this initiative: to deliver a stillborn access initiative that they can point to and say “See, no-one wants open access”. Meanwhile, everyone will be over on Twitter using #icanhazpdf and other such 21st-century workarounds.

Sheesh.

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in attributing where given journals were listed, DOAJ or Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.


What is an ad-hominem attack?

September 4, 2013

I recently handled the revisions on a paper that hopefully will be in press very soon. One of the review comments was “Be very careful not to make ad hominem attacks”.

I was a bit surprised to see that — I wasn’t aware that I’d made any — so I went back over the manuscript, and sure enough, there were no ad homs in there.

There was criticism, though, and I think that’s what the reviewer meant.

Folks, “ad hominem” has a specific meaning. An “ad hominem attack” doesn’t just mean criticising something strongly, it means criticising the author rather than the work. The phrase is Latin for “to the man”. Here’s a pair of examples:

  • “This paper by Wedel is terrible, because the data don’t support the conclusion” — not ad hominem.
  • “Wedel is a terrible scientist, so this paper can’t be trusted” — ad hominem.

What’s wrong with ad hominem criticism? Simply, it’s irrelevant to evaluation of the paper being reviewed. It doesn’t matter (to me as a scientist) whether Wedel strangles small defenceless animals for pleasure in his spare time; what matters is the quality of his work.

Note that ad hominems can also be positive — and they are just as useless there. Here’s another pair of examples:

  • “I recommend publication of Naish’s paper because his work is explained carefully and in detail” — not ad hominem.
  • “I recommend publication of Naish’s paper because he is a careful and detailed worker” — ad hominem.

It makes no difference whether Naish is a careful and detailed worker, or if he always buys his wife flowers on their anniversary, or even if he has a track-record of careful and detailed work. What matters is whether this paper, the one I’m reviewing, is good. That’s all.

As it happens the very first peer-review I ever received — for the paper that eventually became Taylor and Naish (2005) on diplodocoid phylogenetic nomenclature — contained a classic ad hominem, which I’ll go ahead and quote:

It seems to me perfectly reasonable to expect revisers of a major clade to have some prior experience/expertise in the group or in phylogenetic taxonomy before presenting what is intended to be the definitive phylogenetic taxonomy of that group. I do not wish to demean the capabilities of either author – certainly Naish’s “Dinosaurs of the Isle of Wight” is a praiseworthy and useful publication in my opinion – but I question whether he and Taylor can meet their own desiderata of presenting a revised nomenclature that balances elegance, consistency, and stability.

You see what’s happening here? The reviewer was not reviewing the paper, but the authors. There was no need for him or her to question whether we could meet our desiderata: he or she could just have read the manuscript and found out.

(Happy ending: that paper was rejected at the journal we first sent it to, but published at PaleoBios in revised form, and bizarrely is my equal third most-cited paper. I never saw that coming.)

Robin Osborne, professor of ancient history at King’s College, Cambridge, had an article in the Guardian yesterday entitled “Why open access makes no sense”. It was described by Peter Coles as “a spectacularly insular and arrogant argument”, by Peter Webster as an “Amazingly wrong-headed piece” and by Glyn Moody as “easily the most arrogant & dim-witted article I’ve ever read on OA”.

Here’s my response (posted as a comment on the original article):

At a time when the world as a whole is waking up to the open-access imperative, it breaks my heart to read this fusty, elitist, reactionary piece, in which Professor Osborne ends up arguing strongly for his own irrelevance. What a tragic lack of vision, and of ambition.

There is still a discussion to be had over what routes to take to universal open access, how quickly to move, and what other collateral changes need to be made (such as changing how research is evaluated for the purposes of job-searches and promotion). But Osborne’s entitled bleat is no part of that discussion. He has opted out.

The fundamental argument for providing open access to academic research is that research that is funded by the tax-payer should be available to the tax-payer.

That is not the fundamental argument for providing open access (although it’s certainly a compelling secondary one). The fundamental argument is that the job of a researcher is to create new knowledge and understanding; and that it’s insane to then take that new knowledge and understanding and lock it up where only a tiny proportion of the population can benefit from it. That’s true whether the research is funded publicly or by a private charity.

The problem is that the two situations are quite different. In the first case [academic research], I propose both the research questions and the dataset to which I apply them. In the second [commercial research] the company commissioning the work supplies the questions.

Osborne’s position here seems to be that because he is more privileged than a commercial researcher in one respect (being allowed to choose the subject of his research) he should also be more privileged in another (being allowed to choose to restrict his results to an elite). How can such an attitude be explained? I find it quite baffling. Why would allowing researchers to choose their own subjects mean that funders would be happy to allow the results to be hidden from the world?

Publishing research is a pedagogical exercise, a way of teaching others

Yes. Which is precisely why there is no justification for withholding it from those others.

At the end of the day the paper published in a Gold open access journal becomes less widely read. [...] UK scholars who are obliged to publish in Gold open access journals will end up publishing in journals that are less international and, for all that access to them is cost-free, are less accessed in fact. UK research published through Gold open access will end up being ignored.

As a simple matter of statistics, this is flatly incorrect. Open-access papers are read, and cited, significantly more than paywalled papers. The meta-analysis of Swan (2010) surveyed 31 previous studies of the open-access citation advantage, showing that 27 of them found advantages of between 45% and 600%. I did a rough-and-ready calculation on the final table of that report, averaging the citation advantages given for each of ten academic fields (using the midpoints of ranges when given), and found that on average open-access articles are cited 176% more often — that is, 2.76 times as often — as non-open.
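For anyone who wants to reproduce that rough-and-ready calculation, the arithmetic is simple enough to sketch. The per-field ranges below are illustrative placeholders, not Swan’s actual figures — substitute the numbers from the final table of her report:

```python
# Sketch of the midpoint-averaging calculation described above.
# The per-field ranges here are ILLUSTRATIVE PLACEHOLDERS, not the
# actual figures from Swan (2010).

def midpoint(lo, hi):
    """Midpoint of a reported range of citation advantages (in %)."""
    return (lo + hi) / 2

# Hypothetical (field, low %, high %) citation-advantage ranges.
fields = [
    ("Physics",  150, 300),
    ("Biology",   50, 200),
    ("Medicine",  45, 250),
]

# Average the midpoints across fields.
avg_advantage = sum(midpoint(lo, hi) for _, lo, hi in fields) / len(fields)

# An advantage of N% means open-access articles are cited
# (1 + N/100) times as often: e.g. 176% more = 2.76x as often.
multiplier = 1 + avg_advantage / 100

print(f"average advantage: {avg_advantage:.0f}% ({multiplier:.2f}x as often)")
```

The only subtlety is the last step: “176% more often” is a relative increase, so it corresponds to a multiplier of 2.76, not 1.76.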

There can be no such thing as free access to academic research. Academic research is not something to which free access is possible.

… because saying it twice makes it more true.

Like it or not, the primary beneficiary of research funding is the researcher, who has managed to deepen their understanding by working on a particular dataset.

Just supposing this strange assertion is true (which I don’t at all accept), I’m left wondering what Osborne thinks the actual purpose of his research is. On what basis does he think our taxes should pay him to investigate questions which (as he himself reminds us) he has chosen as being of interest to him? Does he honestly believe that the state owes him not just a living, but a living doing the work that he chooses on the subject that he chooses with no benefit accruing to anyone but him?

No, it won’t do. We fund research so that we can all be enriched by the new knowledge, not just an entitled elite. Open access is not just an economic necessity, it’s a moral imperative.
