Questions concerning Open Access research

November 16, 2012

Last night, I got a message from Joseph Kraus, the Collections & E-Resources Analysis Librarian at Penrose Library, University of Denver. He’s asking several open-access advocates (of which I am one) to answer a set of seven questions for a study that will investigate institutional activities and personal opinions concerning open access resources. The title of the study will be Comparing scholarly communication practices and policies between the United States (US) and United Kingdom (UK) stakeholders, and it will be submitted to a BOAI-compliant open-access journal. [See update below]

With Joe’s consent, I am posting his questions here, along with the answers that I gave. It was an interesting process to go through, and it helped me to clarify my own thoughts and feelings on some of these issues.

1) The Finch report and the RCUK report recently came out. These reports have taken stances concerning green and gold open access in the UK. What are your thoughts on the issue of green vs gold open access policies?

Well, the most important point to make is that it really doesn’t matter. Green and Gold OA are not two different things; they are just two complementary strategies to achieve the same goal. So whether we get there by the Green or Gold route is much less important than that we get there. I care much more about full BOAI compliance (i.e. freedom to reuse, not just to read) than I do about Green vs. Gold.

It’s also worth noting that the Finch report doesn’t really take a stance on which route is better — instead, it ignores Green completely, and just doesn’t comment on it one way or the other.

I suppose in principle I slightly prefer Gold, because that way there is only one definitive version of the article. But publishers have a lot of work to do to persuade me that their contribution (as opposed to the editors’ and reviewers’ freely donated contributions) is worth £2000 a pop, or even $1350.

2) PLOS ONE is a well-known large open access journal that covers a broad range of disciplines. Because it has been deemed successful, other publishers have also proposed or started similar journals. What is your opinion of this new type of publication outlet?

PLOS ONE is the single greatest thing to have happened to scholarly publication. Its approach to peer-review is precisely correct: if a submission is good science, it gets published, period. The journal makes no attempt to judge the paper’s likely impact — which is pure guesswork anyway. It lets the scientific community decide, which is exactly as it should be.

(This approach has sometimes been called “peer-review lite”. That is exactly wrong. The peer-review at PLOS ONE is as harsh as it is anywhere. What’s lite, and indeed completely absent, is selection by trendiness and sexiness. Which is exactly as it should be. We are scientists, not marketeers.)

So I am keen to see many other venues with the same approach. That’s important because, as good as PLOS is, we don’t want to see a monoculture develop, not even a PLOS monoculture.

3) Harvard University has recommended to their faculty to “consider submitting articles to open-access journals, or to ones that have reasonable, sustainable subscription costs; move prestige to open access.” The concept of “moving prestige to open access” is an interesting statement to the Harvard faculty authors and researchers. What do you think of this statement?

First, let me take a moment to (A) commend Harvard for taking this initiative, but (B) deplore the very weak wording “recommended … to consider”, rather than imposing an actual mandate. What they’ve done is good; but it could and should have been so much better.

The idea of “moving prestige to open access” is exactly right. During the early days of the OA movement there was a completely groundless idea — propagated by paywall publishers, I presume — that OA venues were somehow inferior to paywalled ones. That idiot notion seems to have died now, but we can and should and must go further — we need to convey to job-search, promotion, tenure and granting committees that open-access publications ought to count for much more than paywalled ones.

The bottom line is, if a paper is behind a paywall, it’s not really published. The academic community is less able to benefit from it; that is even more true of the broader population, which in most cases funded the work. This is the 21st century. By now, the idea of letting your paper be locked up where no-one can see it should be a shameful one, the sort of thing you admit to only when cornered. Harvard’s statement is a good step towards reconfiguring scholarly norms in this way.

4) University presses and many societies are concerned about how the open access movement will affect their financial bottom line. What concerns do you have about open access and society publications?

Without doubt, there is an issue here — it’s the one potential downside of the shift to OA that bothers me.

That said, we do have to ask what scholarly societies are for. In some cases — the ACS springs to mind — we are seeing the tail wagging the dog: the society sometimes talks and acts as though the discipline exists for its benefit rather than vice versa. That won’t do. Societies have to benefit their disciplines, otherwise they are a waste of time, energy and money. And unquestionably the best way they can benefit the science they are there to serve is by releasing research to the world.

So I hope that societies can make the OA transition in a way that allows them to continue to do the things they’re doing. But if it comes to a choice between the society thriving at the science’s expense or vice versa, then the science has to be the winner every time.

5) AltMetrics is gathering steam as an additional method for faculty to determine the impact of their work. Do you plan to take advantage of this data for either your work, or for the benefit of your institution or department?

At this early stage in the story of AltMetrics, I am not too sure what I am supposed to actually do with it, so I am really at the wait-and-see stage.

The one thing I feel passionately about in this area — and it’s so obvious it seems stupid even to say — is can we please measure the right thing? Using impact factors to evaluate journals is statistically illiterate, but it’s at least what IFs were intended for, however flawed they may be. Using IFs to judge a paper by what journal it appears in is idiotic. If you have to have a number to judge the paper by, then use its own citation count if you must — not the citation counts of other papers that appeared in the same journal. And judging a researcher by the IFs of the journals that her papers appeared in transcends the merely idiotic and achieves the level of moronic.

If AltMetrics bring an end to this astonishingly persistent practice, that will be enough of a win to justify all the work being done.

6) The Research Excellence Framework (REF) in the UK notes: “No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs.” While this is a valid statement for UK based research evaluation, it would be impossible to get a majority of academic tenure and promotion committees throughout the United States to agree to a similar statement in the near future. Since the UK has the REF, and the US does not, how much is this holding back the US from adopting greater OA policies at various institutions?

Kudos to the REF for making this statement. The Wellcome Trust has said something similar, and I would love to see other funding bodies (and universities and departments) publicly saying the same.

If US institutions are using IFs to evaluate researchers, then … I am trying to find a polite way to express the depth of my contempt for this damaging and incompetent behaviour, but I am struggling to do it. At the very least, it will contribute to eroding the US’s position in the academic world.

Really. It’s exactly as rational as high-school kids judging their classmates by the label of the clothes they wear. We’re scientists. We’re better than that.

JUST STOP IT, AMERICA!

(You too, France.)

7) Is there anything else you would like to say concerning open access publishing?

I think we’ve just about covered it :-)

Update (10 June 2015)

For some reason, I have only now registered that the article was published in F1000 Research as Cash, carrots, and sticks: Open Access incentives for researchers (Kraus 2014).

18 Responses to “Questions concerning Open Access research”

  1. cc Says:

    I just want to say thank you for continuing to dog this issue. Last year, I helped a friend find sources for her dissertation. She was attending a small college in the UK, and her college simply didn’t have the funds to give students access to all the journals in her field.

    Between me being in America and her in Britain, we managed to find enough sources for her to write a solid paper with good citations, but why do these barriers exist? And how, exactly, is science being served by locking research away from those who are entering the field?

  2. Mike Taylor Says:

    Thanks, cc, this kind of encouragement is really appreciated. Sometimes I feel as though I am shouting at either an empty room or a group of people who are really about ready for me to shut up. Feedback like yours reassures me that’s not true (or at least not the whole truth).

  3. M.E. Says:

    > It’s exactly as rational as high-school kids judging their classmates by the label of the clothes they wear.

    I don’t think you’ve really thought through why people historically used the journal IF to assess the quality of a paper.

    How would you design a system to assess the quality of a paper at or soon after publication? Article metrics like citations or usage won’t help, since they take time to develop. If you need an assessment faster, then perhaps you might hire some experts to read the paper and tell you what they think.

    So, new problem: how do you know your experts are any good? Well, perhaps we could measure their track record at assessing other papers in the past?

    That, of course, is exactly the impact factor – it’s not a measure of the articles, it’s a measure of the quality of the editorial board.

    At some level you must know this – but then I think calling the process as rational as judging teenagers by their clothing brands is an unfair characterization.

  4. Mike Taylor Says:

    Hi, M.E., thanks for commenting.

    How would you design a system to assess the quality of a paper at or soon after publication? Article metrics like citations or usage won’t help, since they take time to develop.

    Yes. And no amount of plucking random four-significant-digit numbers out of the air is going to change that. Posterity decides the quality and value of a paper. Number of citations is a half-decent numerical proxy for the verdict of posterity. So if we insist on approximating some kind of value judgement the moment a paper is published, we want to come up with something that correlates well with eventual citation count.

    If you need an assessment faster, then perhaps you might hire some experts to read the paper and tell you what they think.

    Yes indeed — having people actually read the paper would be much better. That’s how music and movies are evaluated, after all: someone actually listens to the music, watches the movie, and writes about how good or bad it is.

    Ironically, we do in fact do this with academic papers; but then we take the detailed comments and critiques of the reviewers and boil them down to a single bit of information (the Pass/Fail verdict) and throw them away. Not publishing peer-reviews alongside papers is a startlingly wasteful convention.

    So, new problem: how do you know your experts are any good?

    The same way editors choose peer-reviewers: by knowing who has relevant expertise and is known to do good work in the field.

    Well, perhaps we could measure their track record at assessing other papers in the past? That, of course, is exactly the impact factor – it’s not a measure of the articles, it’s a measure of the quality of the editorial board.

    I beg your pardon, but that is not at all what the impact factor is. It’s a measure of how successful the journal has been in a popularity contest.

    If it were any use at all as a measurement of a paper’s worth, then it would act as a decent proxy for the eventual worth of that paper as shown by citations. In fact, it does no such thing: counter-intuitive though it may seem, the correlation between a paper’s citation rate and the impact factor of the journal it was published in is extremely weak, and probably not statistically significant.

    (On the other hand, impact factor does correlate strongly with retraction rate.)

    I think calling the process as rational as judging teenagers by their clothing brands is an unfair characterization.

    Not at all. In both cases, you’re judging something not by its intrinsic merits, but by what trendy brand it’s associated with.


  5. “I beg your pardon, but that is not at all what the impact factor is.”

    I couldn’t agree more. Are we to think Science and Nature have the best editorial boards simply because they reject more papers? If anything, the restricted length and often exaggerated hype of papers in these journals make them of lower quality than average, in my opinion.

  6. M.E. Says:

    I don’t want to defend IF since I agree, it is a flawed metric. There are better citation based metrics and I wish we used those instead of a black box commercial metric.

    My point was only that it’s not *irrational* to consider the journal to be an indicator of the quality of the paper, even if you might wish for a better system.

    Take your comment about “trendy brand”, for example – you say it disparagingly, I know, but really it’s the same mechanism that we use to rank universities themselves – why does everyone want to get into Harvard? And would you say that going to Harvard says nothing about the quality of people coming out of it?

    In reality, of course, Harvard isn’t that special – the people are good mainly because it only selects the best to begin with. And plenty of fine people never get in and go on to do great things. All true. Still, it also remains true that *on average* the quality of a person who went to Harvard is higher than that of someone from a mid- or low-tier university.

    You can fairly criticize this system – for Harvard or for Nature – but I still think calling it irrational goes too far.


  7. M.E. wrote: “My point was only that it’s not *irrational* to consider the journal to be an indicator of the quality of the paper, even if you might wish for a better system.”

    What is rational about knowing that there is no empirical evidence for journal rank and then still using it?
    I’m not too sure about Harvard and other schools being ‘better’ in any way, though they may actually be. I agree with you on one point: as long as it’s not common knowledge that the perceived rank is an illusion, it is indeed not irrational to use the ranking in some way or another. However, journal rank is an illusion, as there is no empirical evidence for it. Journal rank is like astrology, dowsing or homeopathy: using journal rank is indeed irrational.

  8. MRR Says:

    I tend to agree with ME here: IF is statistically flawed, but the principle of judging a paper by its journal is not necessarily irrational.

    Also, about shaming academic systems which rely on IF for funding or promotion (as my Faculty does): it’s not good, but the main alternatives seem to be promoting by age, spreading funds thin among productive and unproductive researchers equally, and old boys networks. Any metric will be imperfect, but the absence of a metric is in my experience pretty bad.

    To re-iterate: I do agree that IF is a bad measure (it should at least be a median, not a mean).

    Also, thank you for your blogging and tweeting, it’s really useful to the community. :-)

  9. Mike Taylor Says:

    MRR, there are a few things going on here, which we need to be careful not to conflate.

    First, there is the question of whether IF is a good measurement of a journal’s quality. I think everyone who’s looked into it at all is in agreement that it is not. At the very least, as you say, a median citation-count would be much more meaningful than a mean; and the negotiability of the denominator and irreproducibility of the results should be enough to make IFs abhorrent to any scientist.
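
    A toy illustration of the median-vs-mean point (a sketch added here for illustration; the citation counts are invented, not real journal data): a single blockbuster paper drags the mean citation count far above what a typical paper in the journal achieves.

        # Invented citation counts for one journal's papers: the mean (roughly
        # what the impact factor reports) is dominated by a single outlier,
        # while the median reflects the typical paper.
        import statistics

        citations_per_paper = [0, 0, 1, 1, 2, 2, 3, 4, 5, 210]  # one blockbuster paper

        print("mean  :", statistics.mean(citations_per_paper))    # 22.8 -- dragged up by the outlier
        print("median:", statistics.median(citations_per_paper))  # 2.0  -- the typical paper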

    But second, and this is even more important, is that the practice of assessing one thing by measuring another is (I am sticking to my guns here) irrational. Even if impact factor was a perfect measure of journal quality, it would still be a wholly useless way of measuring article quality or author quality. To see this, you need only look at your own publications: plot number of citations against impact-factor of the journal, and put a best-fit line through the result. (I should do this and post the result on SV-POW!.)
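
    A minimal sketch of that exercise (assuming numpy and matplotlib are available; the numbers below are invented purely for illustration, not real publication data):

        # Plot each paper's own citation count against the impact factor of the
        # journal it appeared in, fit a least-squares line, and report Pearson's r.
        import numpy as np
        import matplotlib.pyplot as plt

        # (journal impact factor, citations to the paper) -- hypothetical values
        papers = np.array([
            (2.1, 14), (0.8, 31), (4.5, 6), (1.3, 22),
            (9.0, 12), (2.7, 3), (1.1, 40), (3.2, 9),
        ])
        impact_factor, citations = papers[:, 0], papers[:, 1]

        slope, intercept = np.polyfit(impact_factor, citations, 1)   # best-fit line
        r = np.corrcoef(impact_factor, citations)[0, 1]              # Pearson correlation

        plt.scatter(impact_factor, citations)
        xs = np.linspace(impact_factor.min(), impact_factor.max(), 100)
        plt.plot(xs, slope * xs + intercept)
        plt.xlabel("Journal impact factor")
        plt.ylabel("Citations to the paper")
        plt.title(f"Best-fit line; Pearson r = {r:.2f}")
        plt.show()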

    Finally, thanks for your kind words about this blog!


  10. MRR wrote: “Any metric will be imperfect, but the absence of a metric is in my experience pretty bad.”
    Clearly, we don’t want old boys networks replacing themselves with their clones. But to set this as the alternative is a false dichotomy. It may be a risk, but far from inevitable. Even today, for those so inclined, there are plenty of measures that judge the articles themselves and not their containers, so there is no reason to use IF.
    Statistically, you could probably make up your own rank by, say, ranking the journals alphabetically and you wouldn’t do too much worse than IF.
    So using journal rank for anything really is irrational, especially when there are rational alternatives.

  11. MRR Says:

    @Bjoern: I mostly agree with you, but as pointed out by ME, you often need a metric for cases where there was not enough time to accumulate citations or other measures of individual success of an article.

    Also, I strongly suspect that all the altmetrics we keep hearing about only sound cool because they’re new, and are full of biases which also make them poorly adapted to judging science. In the end, the only way to judge science is to read and understand each paper, and the sad fact is that we don’t have time to do this on a sufficient scale for, e.g., funding or promotions.

  12. brembs Says:

    @MRR: good points. See the post I wrote last Thursday, which happens to be exactly on this very issue :-)
    http://bjoern.brembs.net/comment-n881.html

  13. Mike Taylor Says:

    MRR states:

    You often need a metric for cases where there was not enough time to accumulate citations or other measures of individual success of an article.

    You may need such a metric, but you can’t have it. Such a metric simply doesn’t exist, because you can’t measure what hasn’t happened yet. You can make up a number, sure, if your job is to fill in a little box on a form. But you would literally do just as well to throw a dice and write down the number that comes up. Because impact factor does not predict citation count.

    Much better to have no number at all than a meaningless one.

  14. MRR Says:

    IF does not predict citation count, but I don’t care much for citation count anyway.

    Inside a given field, IF correlates pretty well with my experience of how difficult it is to publish in a journal, how demanding the reviewers are, and where you’ll send the work of your best student vs. some paper you’re trying to get rid of. That’s many people’s experience, and that’s why IF is staying in use, although from a statistical perspective it’s horrible.

    Now you’ll say that PLOS One is breaking this, and I’ll agree to a point. It’s breaking the difference between IF 3 and 4, not between IF 4 and 14. I’m happy to contribute to this revolution. ;-)

    Finally, I think that there is a self-fulfilling prophecy aspect which is not that bad. If everyone tries to send their best work to the same place, that place has the best work. It’s almost not important how that place was selected at the start. Like the Harvard example of ME.

  15. Mike Taylor Says:

    Think about what you’re saying. Why would difficulty of publishing in a journal be an indicator of anything more than difficulty?

    As for a self-fulfilling prophecy of people sending their “best” work to the most exclusive venues: if that is indeed happening (rather than IF-victims assuming that what appears in the most exclusive venues is the best work) then it’s no service to science. What we get in the high-IF, high-rejection journals are emasculated, shrivelled remains of proper papers, sliced and diced to fit length limits that serve the journal rather than the science. The idea that having that happen to one’s research is the best outcome is disastrous.

    (BTW, we know that what IF does predict, with statistical significance, is retraction rate.)

  16. brembs Says:

    MRR wrote: “Inside a given field, IF correlates pretty well with my experience”. See, that’s exactly what I mean when I say JR is like dowsing or homeopathy: this correlation only exists in your head and goes away when you apply scientific methods. It’s largely what is called ‘confirmation bias’.
    Thanks for making this so clear. I had the exact same correlation in my head, and when I checked for confirmation bias, I realized that it was a figment of my imagination. The correlation doesn’t exist, any more than dowsing or homeopathy or astrology work.

    P.S.: There is so much more than just citations: expert opinion, methodology, retractions, effect sizes, and so on. No matter what parameter you look at, you can’t find any consistent measure where journal rank would stand out:
    https://docs.google.com/document/d/1VF_jAcDyxdxqH9QHMJX9g4JH5L4R-9r6VSjc7Gwb8ig/edit

  17. MRR Says:

    My last comment for now or we risk running in circles.

    The correlations between IF and other things are correlations with things that can be measured, and these are not usually the most relevant things in my opinion. Nowhere in any correlation is the quality of the data and controls, the expertise of the reviewers, the novelty relative to other papers in the field (and yes, Nature/Science misuse this concept, but it’s still relevant a lot of the time), the quality of the writing, etc etc.


  18. […] we’ve noted here a couple of times before, the REF (Research Excellence Framework) is explicit in disavowing impact factors […]

