Crowdsourcing a database of “predatory OA journals”
December 6, 2012
It’s often been noted that under the author-pays model of publication (Gold open access), journals have a financial incentive to publish as many articles as possible so as to collect as many article processing charges as possible. In the early days of PLOS ONE, Nature famously described it as “relying on bulk, cheap publishing of lower quality papers”.
As the subsequent runaway success of PLOS ONE has shown, that fear was misplaced: real journals will always value their reputation above quick-hit APC income, and that’s reflected by the fact that PLOS ONE papers are cited more often, on average, than those in quality traditional palaeo journals such as JVP and Palaeontology.
But the general concern remains a real one: for every PLOS, there is a Bentham Open. It’s true that anyone who wants to publish an academic paper, however shoddy, will certainly be able to find a “publisher” of some kind that will take it — for a fee. This problem of “predatory publishers” was highlighted in a Nature column three months ago; and the ethical standard of some of the publishers in question was neatly highlighted as they contributed comments on that column, posing as well-known open-access advocates.
The author of that Nature column, Jeffrey Beall, maintains Beall’s List of Predatory, Open-Access Publishers, a useful annotated list of publishers that he has judged to fall into this vanity-publishing category. The idea is that if you’re considering submitting a manuscript to a journal that you don’t already know and trust, you can consult the list to see whether its publisher is reputable.
[An aside: I find a simpler solution is not to send my work to journals that I don’t already know and trust, and I don’t really understand why anyone would do that.]
Towards a better solution
Beall’s list has done sterling work over the last few years, but as the number of open access publishers keeps growing, it’s starting to creak. It’s not really possible for one person to keep track of the entire field. More important, it’s not really desirable for any one person to exercise so much power over the reputation of publishers. For example, the comment trail on the list shows that Hindawi was quite unjustly included for some time, and even now remains on the “watchlist” despite having a good reputation elsewhere.
We live in a connected and open world, where crowdsourcing has built the world’s greatest encyclopaedia, funded research projects and assembled the best database of resources for solving programming problems. We ought to be able to do better together than any one person can do alone — giving us better coverage, and freeing the resource from the potential of bias, whether intended or unintended.
I’ve had this idea floating around for a while, but I was nudged into action today by a Twitter discussion with Richard Poynder and Cameron Neylon. [I wish there were a good way to link to a tree-structured branching discussion on Twitter. If you want to try to reconstruct it you could start here and try following some links.]
What might a better solution look like?
Richard was cautious about how this might work, as he should be. He suggested a wiki at first, but I think we’d need something more structured, because wikis suffer from last-edit-wins syndrome. I imagine some kind of voting system — perhaps resembling how stories are voted up and down (and commented on) in Reddit, or maybe more like the way questions are handled in Stack Exchange.
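To make the voting idea concrete, here’s a minimal sketch of how raw up/down votes on a publisher could be turned into a ranking score. It uses the lower bound of the Wilson score confidence interval — the same statistic Reddit uses for its “best” comment sorting — so that a publisher with 5 votes, all positive, doesn’t outrank one with 100 positive votes and 10 negative. Everything here (the function name, the idea of tallying “trustworthy”/“predatory” votes per publisher) is my own illustration, not part of any existing service:

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval for the proportion
    of positive votes. z=1.96 gives a 95% confidence level.
    Returns 0.0 for a publisher with no votes yet."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)
```

The point of the lower-bound trick is that small vote counts are treated sceptically: a publisher needs a sustained record of positive votes before its score climbs, which is exactly the behaviour you’d want from a crowdsourced reputation list.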
Either way, it would be better if we could use and adapt an existing service rather than building new software from the ground up (even though that’s always my natural tendency). Maybe better still would be to use an existing hosted service: for example, we might be able to get a surprisingly long way just by creating a subreddit, posting an entry for each publisher, then commenting and voting as in any other subreddit.
Cameron had another concern: that it’s hard to build and maintain a blacklist, because the number of predatory publishers is potentially unlimited. There are other reasons to prefer a whitelist — it’s nice to be positive! — and Cameron suggested that the membership of the Open Access Scholarly Publishers Association (OASPA) might make a good starting point.
My feeling is that, while a good solution could certainly say positive things about good publishers as well as negative things about bad publishers, we do need it to produce (among other things) a blacklist, if only to be an alternative to Beall’s one. Since that’s the only game in town, it has altogether too much power at the moment. Richard Poynder distrusts voting systems, but the current state of the art when it comes to predatory publisher lists is that we have a one-man-one-vote system, and Jeffrey Beall is the one man who has the one vote.
We have to be able to do better than that.
Thoughts? Ideas? Suggestions? Offers? Disagreements?