It’s now been widely discussed that Jeffrey Beall’s list of predatory and questionable open-access publishers — Beall’s List for short — has abruptly gone away. No-one really knows why, but there are rumblings that he has been hit with a legal threat that he doesn’t want to defend.

To get this out of the way: it’s always a bad thing when legal threats make information quietly disappear; to that extent, at least, Beall has my sympathy.

That said — overall, I think making Beall’s List was probably not a good thing to do in the first place, being an essentially negative approach, as opposed to DOAJ’s more constructive whitelisting approach. But under Beall’s sole stewardship it was a disaster, due to his well-known ideological opposition to all open access. So I think it’s a net win that the list is gone.

But, more than that, I would prefer that it not be replaced.

Researchers need to learn the very very basic research skills required to tell a real journal from a fake one. Giving them a blacklist or a whitelist only conceals the real issue, which is that you need those skills if you’re going to be a researcher.

Finally, and I’m sorry if this is harsh, I have very little sympathy with anyone who is caught by a predatory journal. Why would you be so stupid? How can you expect to have a future as a researcher if your critical thinking skills are that lame? Think Check Submit is all the guidance that anyone needs; and frankly much more than people really need.

Here is the only thing you need to know, in order to avoid predatory journals, whether open-access or subscription-based: if you are not already familiar with a journal — because it’s published research you respect, or colleagues who you respect have published in it or are on the editorial board — then do not submit your work to that journal.

It really is that simple.

So what should we do now Beall’s List has gone? Nothing. Don’t replace it. Just teach researchers how to do research. (And supervisors who are not doing that already are not doing their jobs.)


Last night, I did a Twitter interview with Open Access Nigeria (@OpenAccessNG). To make it easy to follow in real time, I created a list whose only members were me and OA Nigeria. But because Twitter lists posts in reverse order, and because each individual tweet is encumbered with so much chrome, it’s rather an awkward way to read a sustained argument.

So here is a transcript of those tweets, only lightly edited. They are in bold; I am in regular font. Enjoy!

So @MikeTaylor Good evening and welcome. Twitterville wants to meet you briefly. Who is Mike Taylor?

In real life, I’m a computer programmer with Index Data, a tiny software house that does a lot of open-source programming. But I’m also a researching scientist — a vertebrate palaeontologist, working on sauropods: the biggest and best of the dinosaurs. Somehow I fit that second career into my evenings and weekends, thanks to a very understanding wife (Hi, Fiona!) …

As of a few years ago, I publish all my dinosaur research open access, and I regret ever having let any of my work go behind paywalls. You can find all my papers online, and read much more about them on the blog that I co-write with Matt Wedel. That blog is called Sauropod Vertebra Picture of the Week, or SV-POW! for short, and it is itself open access (CC By)

Sorry for the long answer, I will try to be more concise with the next question!

Ok @MikeTaylor That’s just great! There’s been so much noise around Twitter, the orange colour featuring prominently. What’s that about?

Actually, to be honest, I’m not really up to speed with open-access week (which I think is what the orange is all about). I found a while back that I just can’t be properly on Twitter, otherwise it eats all my time. So these days, rather selfishly, I mostly only use Twitter to say things and get into conversations, rather than to monitor the zeitgeist.

That said, orange got established as the colour of open access a long time ago, and is enshrined in the logo:


In the end I suppose open-access week doesn’t hit my buttons too strongly because I am trying to lead a whole open-access life.

… uh, but thanks for inviting me to do this interview, anyway! :-)

You’re welcome @MikeTaylor. So what is open access?

Open Access, or OA, is the term describing a concept so simple and obvious and naturally right that you’d hardly think it needs a name. It just means making the results of research freely available on the Internet for anyone to read, remix and otherwise use.

You might reasonably ask, why is there any kind of published research other than open access? And the only answer is, historical inertia. For reasons that seemed to make some kind of sense at the time, the whole research ecosystem has got itself locked into this crazy equilibrium where most published research is locked up where almost no-one can see it, and where even the tiny proportion of people who can read published works aren’t allowed to make much use of them.

So to answer the question: the open-access movement is an attempt to undo this damage, and to make the research world sane.

Are there factors perpetuating this inertia you talked about?

Oh, so many factors perpetuating the inertia. Let me list a few …

  1. Old-school researchers who grew up when it was hard to find papers, and don’t see why young whippersnappers should have it easier
  2. Old-school publishers who have got used to making profits of 30-40% of turnover (they get content donated to them, then charge subscriptions)
  3. University administrators who make hiring/promotion/tenure decisions based on which old-school journals a researcher’s papers are in.
  4. Feeble politicians who think it’s important to keep the publishing sector profitable, even at the expense of crippling research.

I’m sure there are plenty of others who I’ve overlooked for the moment. I always say regarding this that there’s plenty of blame to go round.

(This, by the way, is why I called the current situation an equilibrium. It’s stable. Won’t fix itself, and needs to be disturbed.)

So these publishers who put scholarly articles behind paywalls online, do they pay the researchers for publishing their work?


Oh, sorry, please excuse me while I wipe the tears of mirth from my eyes. An academic publisher? Paying an author? Hahahahaha! No.

Not only do academic publishers never pay authors, in many cases they also levy page charges — that is, they charge the authors. So they get paid once by the author, in page-charges, then again by all the libraries that subscribe to read the paywalled papers. Which of course is why, even with their gross inefficiencies, they’re able to make these 30-40% profit margins.

So @MikeTaylor why do many researchers continue to take their work to these restricted access publishers and what can we do about it?

There are a few reasons that play into this together …

Part of it is just habit, especially among more senior researchers who’ve been using the same journals for 20 or 30 years.

But what’s more pernicious is the tendency of academics — and even worse, academic administrators — to evaluate research not by its inherent quality, but by the prestige of the journal that publishes it. It’s just horrifyingly easy for administrators to say “He got three papers out that year, but they were in journals with low Impact Factors.”

Which is wrong-headed on so many levels.

First of all, they should be looking at the work itself, and making an assessment of how well it was done: rigour, clarity, reproducibility. But it’s much easier just to count citations, and say “Oh, this has been cited 50 times, it must be good!” But of course papers are not always cited because they’re good. Sometimes they’re cited precisely because they’re so bad! For example, no doubt the profoundly flawed Arsenic Life paper has been cited many times — by people pointing out its numerous problems.

But wait, it’s much worse than that! Lazy or impatient administrators won’t count how many times a paper has been cited. Instead they will use a surrogate: the Impact Factor (IF), which is a measure not of papers but of journals.

Roughly, the IF measures the average number of citations received by papers that are published in the journal. So at best it’s a measure of journal quality (and a terrible measure of that, too, but let’s not get into that). The real damage is done when the IF is used to evaluate not journals, but the papers that appear in them.
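To make that arithmetic concrete, here is a toy calculation of the standard two-year impact factor; the numbers are invented, not any real journal’s:

```python
# Toy sketch of how a two-year impact factor is calculated.
# The figures are invented; no real journal's data is used.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to items the journal published in
    the previous two years, divided by the number of those items."""
    return citations_this_year / citable_items_prev_two_years

# Suppose a journal published 200 citable items across 2012-2013,
# and those items picked up 300 citations during 2014:
print(impact_factor(300, 200))  # 1.5
```

The point of the sketch is that this number is a property of the journal as a whole: individual papers contribute to the average, but they are not described by it.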

And because that’s so widespread, researchers are often desperate to get their work into journals that have high IFs, even if they’re not OA. So we have an idiot situation where a selfish, rational researcher is best able to advance her career by doing the worst thing for science.

(And BTW, counter-intuitively, the number of citations an individual paper receives is NOT correlated significantly with the journal’s IF. Bjorn Brembs has discussed this extensively, and also shows that IF is correlated with retraction rate. So in many respects the high-IF journals are actually the worst ones you can possibly publish your work in. Yet people feel obliged to.)

*pant* *pant* *pant* OK, I had better stop answering this question, and move on to the next. Sorry to go on so long. (But really! :-) )

This is actually all so enlightening. You just criticised Citation Index along with Impact Factor but OA advocates tend to hold up a higher Citation Index as a reason to publish Open Access. What do you think regarding this?

I think that’s realpolitik. To be honest, I am also kind of pleased that the PLOS journals have pretty good Impact Factors: not because I think the IFs mean anything, but because they make those journals attractive to old-school researchers.

In the same way, it is a well-established fact that open-access articles tend to be cited more than paywalled ones — a lot more, in fact. So in trying to bring people across into the OA world, it makes sense to use helpful facts like these. But they’re not where the focus is.

But the last thing to say about this is that even though raw citation-count is a bad measure of a paper’s quality, it is at least badly measuring the right thing. Evaluating a paper by its journal’s IF is like judging someone by the label of their clothes.

So @MikeTaylor Institutions need to stop evaluating research papers based on where they are published? Do you know of any doing it right?

I’m afraid I really don’t know. I’m not privy to how individual institutions do things.

All I know is, in some countries (e.g. France) abuse of the IF is much more strongly institutionalised. It’s tough for French researchers.

What are the various ways researchers can make their work available for free online?

Brilliant, very practical question! There are three main answers. (Sorry, this might go on a bit …)

First, you can post your papers on preprint servers. The best known one is arXiv, which now accepts papers from quite a broad subject range. For example, a preprint of one of the papers I co-wrote with Matt Wedel is freely available on arXiv. There are various preprint servers, including arXiv for physical sciences, bioRxiv, PeerJ Preprints, and SSRN (Social Science Research Network).

You can put your work on a preprint server whatever your subsequent plans are for it — even if (for some reason) it’s going to a paywall. There are only a very few journals left that follow the “Ingelfinger rule” and refuse to publish papers that have been preprinted.

So preprints are option #1. Number 2 is Gold Open Access: publishing in an open-access journal such as PLOS ONE, a BMC journal or eLife. As a matter of principle, I now publish all my own work in open-access journals, and I know lots of other people who do the same — ranging from amateurs like me, via early-career researchers like Erin McKiernan, to lab-leading senior researchers like Michael Eisen.

There are two potential downsides to publishing in an OA journal. One, we already discussed: the OA journals in your field may not be the most prestigious, so depending on how stupid your administrators are you could be penalised for using an OA journal, even though your work gets cited more than it would have done in a paywalled journal.

The other potential reason some people might want to avoid using an OA journal is Article Processing Charges (APCs). Because OA publishers have no subscription revenue, one common business model is to charge authors an APC for publishing services instead. APCs can vary wildly, from $0 up to $5000 in the most extreme case (a not-very-open journal run by the AAAS), so they can be off-putting.

There are three things to say about APCs.

First, remember that lots of paywalled journals demand page charges, which can cost more!

But second, please know that more than half of all OA journals actually charge no APC at all. They run on different models. For example in my own field, Acta Palaeontologica Polonica and Palaeontologia Electronica are well respected OA journals that charge no APC.

And the third thing is APC waivers. These are very common. Most OA publishers have it as a stated goal that no-one should be prevented from publishing with them by lack of funds for APCs. So for example PLOS will nearly always give a waiver when requested. Likewise Ubiquity, and others.

So there are lots of ways to have your work appear in an OA journal without paying for it to be there.

Anyway, all that was about the second way to make your work open access. #1 was preprints, #2 is “Gold OA” in OA journals …

And #3 is “Green OA”, which means publishing in a paywalled journal, but depositing a copy of the paper in an open repository. The details of how this works can be a bit complicated: different paywall-based publishers allow you to do different things, e.g. it’s common to say “you can deposit your peer-reviewed, accepted but unformatted manuscript, but only after 12 months”.

Opinions vary as to how fair or enforceable such rules are. Some OA advocates prefer Green. Others (including me) prefer Gold. Both are good.

See this SV-POW! post on the practicalities of negotiating Green OA if you’re publishing behind a paywall.

So to summarise:

  1. Deposit preprints
  2. Publish in an OA journal (getting a fee waiver if needed)
  3. Deposit postprints

I’ve written absolutely shedloads on these subjects over the last few years, including this introductory batch. If you only read one of my pieces about OA, make it this one: The parable of the farmers & the Teleporting Duplicator.

Last question – Do restricted access publishers pay remuneration to peer reviewers?

I know of no publisher that pays peer reviewers. But actually I am happy with that. Peer-review is a service to the community. As soon as you encumber it with direct financial incentives, things get more complicated and there’s more potential for conflict of interest. What I do is, I only perform peer-reviews for open-access journals. And I am happy to put that time/effort in knowing the world will benefit.

And so we bring this edition to a close. We say a big thanks to our special guest @MikeTaylor who’s been totally awesome and instructive.

Thanks, it’s been a privilege.

I hate to keep flogging a dead horse, but since this issue won’t go away I guess I can’t, either.

1. Two years ago, I wrote about how you have to pay to download Elsevier’s “open access” articles. I showed how their open-access articles claimed “all rights reserved”, and how when you use the site’s facilities to ask about giving one electronic copy to a student, the price is £10.88. As I summarised at the time: “Free” means “we take the author’s copyright, all rights are reserved, but you can buy downloads at a 45% discount from what they would otherwise cost.” No-one from Elsevier commented.

2. Eight months ago, Peter Murray-Rust explained that Elsevier charges to read #openaccess articles. He showed how all three of the randomly selected open-access articles he looked at had download fees of $31.50. No-one from Elsevier commented (although see below).

3. A couple of days ago, Peter revisited this issue, and found that Elsevier are still charging THOUSANDS of pounds for CC-BY articles. IMMORAL, UNETHICAL, maybe even ILLEGAL. This time he picked another Elsevier OA article at random, and was quoted £8000 for permission to print 100 copies. The one he looked at says “Open Access” in gold at the top and “All rights reserved” at the bottom. Its “Get rights and content” link takes me to RightsLink, where I was quoted £1.66 to supply a single electronic copy to a student on a course at the University of Bristol:

Screenshot from 2014-03-11 09:40:35

(Why was I quoted a wildly different price from Peter? I don’t know. Could be to do with the different university, or because he proposed printing copies instead of using an electronic one.)

On Peter’s last article, an Elsevier representative commented:

Alicia Wise says:
March 10, 2014 at 4:20 pm
Hi Peter,

As noted in the comment thread to your blog back in August we are improving the clarity of our OA license labelling (eg on ScienceDirect) and metadata feeds (eg to Rightslink). This is work in progress and should be completed by summer. I am working with the internal team to get a more clear understanding of the detailed plan and key milestones, and will tweet about these in due course.

With kind wishes,


Dr Alicia Wise
Director of Access and Policy

(Oddly, I don’t see the referenced comment in the August blog-entry, but perhaps it was on a different article.)

Now here is my problem with this.

First of all, either this is deliberate fraud on Elsevier’s part — charging for the use of something that is free to use — or it’s a bug. Following Hanlon’s razor, I prefer the latter explanation. But assuming it’s a bug, why has it taken two years to address? And why is it still not fixed?

Elsevier, remember, are a company with an annual revenue exceeding £2bn. That’s £2,000,000,000. (Rather pathetically, their site’s link to the most recent annual report is broken, but that’s a different bug for a different day.) Is it unreasonable to expect that two years should be long enough for them to fix a trivial bug?

All that’s necessary is to change the “All rights reserved” message and the “Get rights and content” link to say “This is an open-access article, and is free to re-use”. We know that the necessary metadata is there because of the “Open Access” caption at the top of the article. So speaking from my perspective as a professional software developer of more than thirty years’ standing, this seems like a ten-line fix that should take maybe a man-hour; at most a man-day. A man-day of programmer time would cost Elsevier maybe £500 — that is, 0.0000125% of the revenue they’ve taken since this bug was reported two years ago. Is it really too much to ask?
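A fix along those lines might be sketched like this; the function and field names are my own inventions for illustration, not anything from Elsevier’s actual codebase:

```python
# Hypothetical sketch: derive the rights statement from the same
# metadata that already drives the gold "Open Access" banner.
# All names here are invented for illustration.

def is_open_access(article):
    # The one test that the banner-rendering code already performs.
    return article.get("license", "").startswith("CC")

def banner(article):
    return "Open Access" if is_open_access(article) else ""

def rights_statement(article):
    # Reuse the same test, instead of hard-coding "All rights reserved".
    if is_open_access(article):
        return "This is an open-access article, and is free to re-use"
    return "All rights reserved"

article = {"title": "Some CC-BY paper", "license": "CC-BY"}
print(banner(article))            # Open Access
print(rights_statement(article))  # This is an open-access article, and is free to re-use
```

The design point is simply that one shared predicate drives every place the rights status appears, so the banner and the rights link can never disagree.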

(One can hardly help comparing this performance with that of PeerJ, who have maybe a ten-thousandth of Elsevier’s income and resources. When I reported three bugs to them in the course of a couple of days, they fixed them all with an average report-to-fix time of less than 21 hours.)

Now here’s where it turns sinister.

The PeerJ bugs I mentioned above cost them — not money, directly, but a certain amount of reputation. By fixing them quickly, they fixed that reputation damage (and indeed gained reputation by responding so quickly). By contrast, the Elsevier bug we’re discussing here doesn’t cost them anything. It makes them money, by misleading people into paying for permissions that they already have. In short, not fixing this bug is making money for Elsevier. It’s hard not to wonder: would it have remained unfixed for two years if it was costing them money?

But instead of a rush to fix the bug, we have this kind of thing:

I find that very hard to accept. However complex your publishing platform is, however many different modules interoperate, however much legacy code there is — it’s not that hard to take the conditional that emits “Open Access” in gold at the top of the article, and make the same test in the other relevant places.

As John Mark Ockerbloom observes:

Come on, Elsevier. You’re better than this. Step up. Get this done.

Update (21st March 2014)

Ten days later, Elsevier have finally responded. To give credit where it’s due, it’s actually pretty good: it notes how many customers made payments they needn’t have made (about 50), how much they paid in total (about $4000) and says that they are actively refunding these payments.

It would have been nice, mind you, had this statement contained an actual apology: the words “sorry”, “regret” and “apologise” are all notably absent.

And I remain baffled that the answer to “So when will this all be reliable?” is “by the summer of 2014”. As noted above, the pages in question already have the information that the articles are open access, as noted in the gold “Open Access” text at top right of the pages. Why it’s going to take several more months to use that information elsewhere in the same pages is a mystery to me.

Update 2 (24th March 2014)

As noted by Alicia in a comment below, Elsevier employee Chris Shillum has posted a long comment on Elsevier’s response, explaining in more detail what the technical issues are. Unfortunately there seems to be no way to link directly to the comment, but it’s the fifth one.


It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.

As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.

To bring this home, I plotted my own personal impact-factor/citation-count graph. I used Google Scholar’s citation counts of my articles, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data-points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):


I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
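For anyone who wants to draw the same kind of graph from their own publication record, a minimal sketch follows; the data points here are invented placeholders, not my actual citation counts:

```python
# Minimal sketch of a personal impact-factor vs. citation-count plot.
# The data points are invented placeholders, not the real figures
# behind the graph above.
import numpy as np

impact_factors = np.array([0.0, 0.0, 1.58, 2.21, 2.21, 38.6])  # venue IFs
citations      = np.array([12,  5,   60,   35,   8,    1])     # cites per paper

# Least-squares best-fit line: citations ~ slope * IF + intercept
slope, intercept = np.polyfit(impact_factors, citations, 1)
print(f"slope = {slope:.3f}")  # negative here: one high-IF outlier drags it down
```

With this particular made-up data the slope comes out negative for exactly the reason discussed in the text: a single barely-cited paper in a very high-IF venue is enough to tip the line.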

There are a few things worth unpacking on that graph.

First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.

My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a very well-respected journal in the palaeontology community but which has a modest impact factor of 1.58.

My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Palaeontology — unquestionably the flagship journal of our discipline, despite its also unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)

In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing given how dreadfully slowly the project has progressed.)

Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)

What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.



I was astonished yesterday to read Understanding and addressing research misconduct, written by Linda Lavelle, Elsevier’s General Counsel, and apparently a specialist in publication ethics:

While uncredited text constitutes copyright infringement (plagiarism) in most cases, it is not copyright infringement to use the ideas of another. The amount of text that constitutes plagiarism versus ‘fair use’ is also uncertain — under the copyright law, this is a multi-prong test.

So here (right in the first paragraph of Lavelle’s article) we see copyright infringement equated with plagiarism. And then, for good measure, the confusion is hammered home by fair use (a defence against accusations of copyright violation) being depicted as a defence against accusations of plagiarism.

This is flatly wrong. Plagiarism and copyright violation are not the same thing. Not even close.

First, plagiarism is a violation of academic norms but not illegal; copyright violation is illegal, but in truth pretty ubiquitous in academia. (Where did you get that PDF?)

Second, plagiarism is an offence against the author, while copyright violation is an offence against the copyright holder. In traditional academic publishing, they are usually not the same person, due to the ubiquity of copyright transfer agreements (CTAs).

Third, plagiarism applies when ideas are copied, whereas copyright violation occurs only when a specific fixed expression (e.g. sequence of words) is copied.

Fourth, avoiding plagiarism is about properly apportioning intellectual credit, whereas copyright is about maintaining revenue streams.

Let’s consider four cases (with good outcomes in green and bad ones in red):

  1. I copy big chunks of Jeff Wilson’s (2002) sauropod phylogeny paper (which is copyright the Linnean Society of London) and paste it into my own new paper without attribution. This is both plagiarism against Wilson and copyright violation against the Linnean Society.
  2. I copy big chunks of Wilson’s paper and paste it into mine, attributing it to him. This is not plagiarism, but copyright violation against the Linnean Society.
  3. I copy big chunks of Riggs’s (1904) Brachiosaurus monograph (which is out of copyright and in the public domain) into my own new paper without attribution. This is plagiarism against Riggs, but not copyright violation.
  4. I copy big chunks of Riggs’s paper and paste it into mine with attribution. This is neither plagiarism nor copyright violation.

Plagiarism is about the failure to properly attribute the authorship of copied material (whether copies of ideas or of text or images). Copyright violation is about failure to pay for the use of the material.

Which of the two issues you care more about will depend on whether you’re in a situation where intellectual credit or money is more important — in other words, whether you’re an author or a copyright holder. For this reason, researchers tend to care deeply when someone plagiarises their work but to be perfectly happy for people to violate copyright by distributing copies of their papers. Whereas publishers, who have no authorship contribution to defend, care deeply about copyright violation.

One of the great things about the Creative Commons Attribution Licence (CC By) is that it effectively makes plagiarism illegal. It requires that attribution be maintained as a condition of the licence; so if attribution is absent, the licence does not pertain; which means the plagiariser’s use of the work is not covered by it. And that means it’s copyright violation. It’s a neat bit of legal ju-jitsu.


  • Riggs, Elmer S. 1904. Structure and relationships of opisthocoelian dinosaurs. Part II, the Brachiosauridae. Field Columbian Museum, Geological Series 2:229-247, plus plates LXXI-LXXV.
  • Wilson, Jeffrey A. 2002. Sauropod dinosaur phylogeny: critique and cladistic analysis. Zoological Journal of the Linnean Society 136:217-276.