Crowdsourcing a database of “predatory OA journals”

December 6, 2012

The problem

It’s often been noted that under the author-pays model of publication (Gold open access), journals have a financial incentive to publish as many articles as possible so as to collect as many article processing charges as possible. In the early days of PLOS ONE, Nature famously described it as “relying on bulk, cheap publishing of lower quality papers”.

As the subsequent runaway success of PLOS ONE has shown, that fear was misplaced: real journals will always value their reputation above quick-hit APC income, and that’s reflected by the fact that PLOS ONE papers are cited more often, on average, than those in quality traditional palaeo journals such as JVP and Palaeontology.

But the general concern remains a real one: for every PLOS, there is a Bentham Open. It’s true that anyone who wants to publish an academic paper, however shoddy, will certainly be able to find a “publisher” of some kind that will take it — for a fee. This problem of “predatory publishers” was highlighted in a Nature column three months ago; and the ethical standard of some of the publishers in question was neatly highlighted as they contributed comments on that column, posing as well-known open-access advocates.

A solution

The author of that Nature column, Jeffrey Beall, maintains Beall’s List of Predatory, Open-Access Publishers, a useful annotated list of publishers that he has judged to fall into this vanity-publishing category. The idea is that if you’re considering submitting a manuscript to a journal that you don’t already know and trust, you can consult the list to see whether its publisher is reputable.

[An aside: I find a simpler solution is not to send my work to journals that I don’t already know and trust, and I don’t really understand why anyone would do that.]

Towards a better solution

Beall’s list has done sterling work over the last few years, but as the number of open access publishers keeps growing, it’s starting to creak. It’s not really possible for one person to keep track of the entire field. More important, it’s not really desirable for any one person to exercise so much power over the reputation of publishers. For example, the comment trail on the list shows that Hindawi was quite unjustly included for some time, and even now remains on the “watchlist” despite having a good reputation elsewhere.

We live in a connected and open world, where crowdsourcing has built the world’s greatest encyclopaedia, funded research projects and assembled the best database of resources for solving programming problems. We ought to be able to do better together than any one person can do alone — giving us better coverage, and freeing the resource from the potential of bias, whether intended or unintended.

I’ve had this idea floating around for a while, but I was nudged into action today by a Twitter discussion with Richard Poynder and Cameron Neylon. [I wish there were a good way to link to a tree-structured branching discussion on Twitter. If you want to try to reconstruct it you could start here and try following some links.]

What might a better solution look like?

Richard was cautious about how this might work, as he should be. He suggested a wiki at first, but I think we’d need something more structured, because wikis suffer from last-edit-wins syndrome. I imagine some kind of voting system — perhaps resembling how stories are voted up and down (and commented on) in Reddit, or maybe more like the way questions are handled in Stack Exchange.

Either way, it would be better if we could use and adapt an existing service rather than building new software from the ground up (even though that’s always my natural tendency). Maybe better still would be to use an existing hosted service: for example, we might be able to get a surprisingly long way just by creating a subreddit, posting an entry for each publisher, then commenting and voting as in any other subreddit.
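As an aside, the kind of up/down-voting I have in mind needn’t be naive vote-counting. Reddit reportedly ranks comments by the lower bound of the Wilson score confidence interval, which discounts entries with very few votes. Here is a minimal sketch of that idea (the publisher names and vote counts below are entirely made up, for illustration only):

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score interval at ~95% confidence.

    A publisher with 3 up / 0 down ranks *below* one with 100 up / 1 down,
    because a tiny sample gives us little confidence in a 100% rating.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n  # observed fraction of positive votes
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - spread) / denom

# A tiny, entirely hypothetical "publisher reputation" table:
votes = {"GoodPub": (100, 1), "NewPub": (3, 0), "ShadyPub": (2, 40)}
ranked = sorted(votes, key=lambda p: wilson_lower_bound(*votes[p]), reverse=True)
```

With these invented numbers, the well-established publisher sorts first, the promising-but-new one second, and the heavily downvoted one last. Whether any real platform exposes this is another matter; the point is only that “voting” can be more robust than a raw tally.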

Cameron had another concern: that it’s hard to build and maintain a blacklist, because the number of predatory publishers is potentially unlimited. There are other reasons to prefer a whitelist — it’s nice to be positive! — and Cameron suggested that the membership of the Open Access Scholarly Publishers Association (OASPA) might make a good starting point.

My feeling is that, while a good solution could certainly say positive things about good publishers as well as negative things about bad publishers, we do need it to produce (among other things) a blacklist, if only to be an alternative to Beall’s one. Since that’s the only game in town, it has altogether too much power at the moment. Richard Poynder distrusts voting systems, but the current state of the art when it comes to predatory publisher lists is that we have a one-man-one-vote system, and Jeffrey Beall is the one man who has the one vote.

We have to be able to do better than that.

Thoughts? Ideas? Suggestions? Offers? Disagreements?

21 Responses to “Crowdsourcing a database of “predatory OA journals””

  1. “Jeffrey Beall is the one man who has the one vote. We have to be able to do better than that”. I don’t have much to offer, other than that I fully concur with that quote.

  2. Andy Farke Says:

    I would argue that voting should, at best, be only a small component of any rating system. Beall has posted a list of criteria for determining predatory OA publishers, so perhaps a more balanced approach would be to evaluate these criteria for all relevant publishers. The result might be an annotated table for each publisher, with a checklist showing the various criteria and links to appropriate documentation (similar to tables I’ve seen comparing Linux distros, for instance). This would take far more work, but in the end I think it would be more useful. It would also enable a more transparent percentage grade for each publisher, similar to such ratings done for educational standards in individual states, or whatever. As an author, I want a standardized comparison of why certain publishers are considered predatory, rather than trying to wade through a sea of comments.

  3. A. Legate Says:

    This isn’t 100% on topic, but do you think you (or another big name OA person) could pop on over to and engage with the philosophers?

  4. This is an awesome idea, but at some point it might be just easier to keep a white list than a black list. I would suggest a stackexchange approach to start. In particular, Academia StackExchange already has discussions about how to identify a predatory publisher, how to judge quality (where I offer the same advice as you do in your post), how much to pay, and inquiries about specific journals. A good approach would be to start a community wiki question there, if you are not an active user of academia SE then I can ask on your behalf on the meta. We might even be able to get some special support from the SE overlords.

  5. Mike Taylor Says:

    Andy, I kind of agree that what you propose would be better. But it would also be a great deal more work, and so require a lot more activation energy. I don’t know if we can make that happen — or, at least, in order for it to happen it’ll need someone to put in a lot more effort than I am prepared to put in. I wonder whether it might not be more realistic to do something more lightweight that nevertheless represents a big step forward over the current situation.

    [Of course, I still think the real solution is just for people not to submit to journals that they know nothing about. But there’s at least a perception that that’s not enough of an answer.]

    A. Legate, I actually have a post in the works that addresses the APC-fears of that post you link to. I’ll try to finish that up some time today or tomorrow.

    Thanks, Artem, for the StackExchange links — I didn’t know those guys were already onto these issues, to some extent. I’ll investigate.

  6. I do not think this is a task for OASPA, or any other publishers’ organisation. I am convinced that it needs to be driven by researchers. Quite apart from the potential conflicts of interest that publishers must inevitably experience in assessing and judging other publishers, it is researchers that will have direct experience of the service (or lack of it) provided by predatory publishers.

    I have no great thoughts on what platform to use. I will however say that, while I acknowledge that using a wiki would mean adopting a last-edit-wins approach, I am conscious that one of the problems with Beall’s list is that publishers’ names often disappear overnight without any explanation as to why they have been removed. With a wiki you do at least have an edit history.

    But here, in my view, is the real challenge:

    As someone who has interviewed some allegedly predatory publishers, I frequently get emails sent to me, and posts submitted to my blog, containing complaints and claims about these publishers. However, I am generally not able to do anything with these emails/posts, either because they contain unsubstantiated allegations about the publishers, or because the people who send/post the comments are not willing to be named. I try not to publish anonymous allegations about publishers if I can help it.

    It seems to me then that it would be hard to get people to share their personal experiences of predatory publishers (which is what you surely do need if you want to provide a meaningful assessment) unless you allow anonymous comments. But if you allow anonymous comments you will get a welter of unsubstantiated claims from the usual trolls and troublemakers. I personally don’t think you should allow anonymous comments, but if you don’t you are likely to get little in the way of feedback.

  7. Mike Taylor Says:

    I just now found the 2013 version of Beall’s list. Hindawi no longer appears. Unfortunately, the new list lacks the brief discussion of each publisher that the old version had.

  8. Mike Taylor Says:

    I also tweeted Beall inviting him to comment on this post. It will be interesting to see what he makes of it, and how he suggests we proceed.

  9. I will be co-moderating a session on open access journals at Science Online 2013 about issues like this.

    Maybe we’ll solve the problem there, in our hour or so of discussion. Hope springs eternal, right?

  10. Mike, thanks for this post. I welcome alternatives to my list. If someone can figure out a better way to do what I’ve been trying to do, then more power to them. Competition is good! Let me know if I can help in any way.

  11. Mike Taylor Says:

    Thanks, Jeffrey, much appreciated. As the pioneer in this area, your thoughts on the ideas we have floating around will be particularly appreciated. Any suggestion regarding platform?

  12. my $0.02:

    I think it’d be dangerous to rely upon weight of numbers for a rating system. E.g. “42 (anonymous) people gave this journal a 5 star rating”. I’d worry about gaming and people posting fake positive stats & fake negative stats for various reasons.

    Instead, how about a review collating website akin to ?

    People could post their reviews of the experience of actually publishing papers in journal X e.g. this helpful review of MDPI by Phillip Lord:

    People could give accurate data on submission-to-publication time (not the publisher-fudged version of events…), overall ease of manuscript submission, etc. But importantly I’d make it completely non-anonymous – expose the email address of the reviewer perhaps?

    This would also be a nice positive kinda ‘whitelist-ish’ way of doing it.
    Published in a journal that you *know* is legitimate but perhaps less heard-of or new? Tell people about it & help that journal build-up its reputation by publicly vouching for it.

    As for some of the entities on Beall’s list – they’re so poor I wonder who would even *think* of publishing there. Anyone can set up a website & PayPal account to ask for ‘$200 to publish your manuscript’, and Beall could find this and call it a heinous ‘predatory OA journal’. Many on the list are seriously amateur scams and no more than that. I’m not all that worried about this ‘problem’ or its ‘growth’ tbh…

  13. Jeroen Bosman Says:

    I’d suggest 1) making the list fully evidence-based, so no voting-based list, and 2) trying to cooperate with DOAJ to make their listing of journals richer (e.g. add an indication of peer review), setting very clear criteria for inclusion in that list and also listing journals that do not meet those criteria. The latter would then be your desired blacklist.

  14. +1 Jeroen

    further enriching the data on DOAJ (and working with them, with their blessing) would be a good thing to do, to further highlight really good OA journals perhaps? Even adding exact APC-fee data would be good (*if* the journal charges an APC, most DOAJ-listed journals do not charge an APC).

    The only thing I’d query is what constitutes ‘evidence’ of journal/article quality? I think it’s very hard to provide completely objective evidence.

  15. Mike Taylor Says:

    Agreed: the more I think about this, the more I wonder whether enriching DOAJ might not be the best path to take.

    Does anyone know the people who run it?

  16. A little late to the party but just wanted to put down both some concerns and some ideas. Concerns first:

    The fundamental challenge with a blacklist is that it has to be bang up to date to be useful, and this is difficult to resource. You’ve also got to have really robust evidence and mechanisms in place. What happens if someone complains that a publisher wasn’t on the blacklist? And of course what happens if someone is on it wrongly: either by accident, or because they once deserved to be but are now trying to make amends.

    This is the reason I favour a whitelist approach – it’s easier to resource and more reliable, inasmuch as if a publisher/journal is on the list you know it has been through some sort of vetting process. It also means there’s a possible business model, in that you could in principle charge for certification.

    Of course, this raises a potential conflict of interest, and as Richard and Mike have pointed out self-regulation may not be either the best approach or credible. My concern with “researchers” is that I suspect the researchers who care enough to engage also probably have strong links with existing journals and publishers so are not as much outsiders as we might think.

    So where does that leave us? Well, I would argue for a whitelist approach (perhaps with a blacklist for really serious repeat offenders) that could be based on a combination of existing criteria after a community consultation. I’d argue for expert vetting rather than voting, but there could be a crowdsourced notification mechanism as well. What would be ideal would be for it to be managed by a third-party organisation that had community credibility and wasn’t tied too strongly to any one set of stakeholders.

    Lots of questions to be worked through in terms of governance, sustainability models, and criteria but it does seem like a reasonable approach to take.

  17. Hans Pfeiffenberger Says:

    @Cameron: you wrote “It also means there’s a possible business model in that you could in principle charge for certification.”
    What about predatory certification (as with the Bangladesh factory which burnt recently)?

  18. Some related discussion here on FriendFeed:-

  19. I thought the blog was an interesting read until I got to the comments. Rating systems do work, but it is always better if they are moderated by a governing body.

  20. Felipe G. Nievinski Says:

    In the spirit of “crowdsourcing with oversight”, the DOAJ is currently recruiting volunteers to help “processing new journal applications and reviewing existing journals”, see their application form: .

  21. […] negative approach, as opposed to DOAJ’s more constructive whitelisting approach. But under Beall’s sole stewardship it was a disaster, due to his well-known ideological opposition to all open access. So I think […]
