The opening remarks by the hosts of conferences are usually highly forgettable, a courtesy platform offered to a high-ranking academic who has nothing to say about the conference’s subject. NOT THIS TIME!

This is the opening address of APE 2018, the Academic Publishing in Europe conference. The remarks are by Martin Grötschel, who, as well as being president of the host institution, the Berlin-Brandenburg Academy of Sciences and Humanities, is a 25-year veteran of open-access campaigning and a member of the German DEAL negotiating team.

Here are some choice quotes:

1m50s: “I have always been aware of the significant imbalance and the fundamental divisions of the academic publication market. Being in the DEAL negotiation team, this became even more apparent …”

2m04s: “On the side of the scientists there is an atomistic market where, up to now and unfortunately, many of the actors play without having any clue about the economic consequences of their activities.”

2m22s: “In Germany and a few other countries where buyer alliances have been organised, they are, as expected, immediately accused of forming monopolies and they are taken to court — fortunately, without success, and with the result of strengthening the alliances.”

2m38s: “On the publishers’ side there is a very small number of huge publication enterprises with very smart marketing people. They totally dominate the market, produce grotesque profits, and amazingly manage to pretend to be the Good Samaritans of the sciences.”

2m27s: “And there are the tiny [publishers …] tentatively observed by many delegates of the big players, who are letting them play the game, ready to swallow them if an opportunity comes up.”

3m18s: “When you, the small publishers, discuss with the representatives of the big guys, these are most likely very friendly to you. But […] when it comes to discussing system changes, when the arguments get tight, the smiles disappear and the greed begins to gleam.”

3m42s: “You will hear in words, and not implicitly, that the small academic publishers are considered to be just round-off errors, tolerated for another while, irrelevant for the world-wide scientific publishing market, and having no influence at all.”

4m00s: “One big publisher stated: if your country stops subscribing to our journals, science in your country will be set back significantly. I responded […] it is interesting to hear such a threat from a producer of envelopes who does not have any idea of the contents.”

4m39s: “Will the small publishers side with the intentions of the scholars? Or will you try to copy the move towards becoming a packaging industry that exploits the volunteer work of scientists and results financed by public funding?”

5m55s: “I do know, though, that the major publishers are verbally agreeing [to low-cost Gold #OpenAccess], but not acting in this direction at all, simply to maintain their huge profit margins.”

6m06s: “In a market economy, no-one can argue against profit maximisation [of barrier-based scholarly publishers]. But one is also allowed to act against it. The danger may be really disruptive, instead of smooth moves in the development of the academic publishing market.”

6m42s: “You may not have enjoyed my somewhat unusual words of welcome, but I do hope that you enjoy this year’s APE conference.”

It’s just beautiful to hear someone in such a senior position, given such a platform, using it to say so very clearly what we’re all thinking. (And as a side-note: I’m constantly amazed that so many advocates are so clear, emphatic and rhetorically powerful in their second, or sometimes third, language. Humbling.)

As RLUK’s David Prosser noted: “I bet this wasn’t what the conference organisers were expecting. A fabulous, hard-hitting polemic on big publishers #OA.”

Note. This post is adapted from a thread of tweets that I posted excerpting the video.

This morning, I was invited to review a paper — one very relevant to my interests — for a non-open-access journal owned by one of the large commercial barrier-based publishers. This has happened to me several times now; and I declined, as I have done ever since 2011.

I know this path is not for everyone. But for anybody who feels similarly to how I do but can’t quite think what to say to the handling editor and corresponding author, here are the messages that I sent to both.

First, to the handling editor (who in this case also happened to be the Editor-in-Chief):

Dear EDITOR NAME,

I’m writing to apologise for turning down your request that I review NAME OF PAPER. The reason is that I am wholly committed to the free availability of all scholarly research to everyone, and I cannot in good conscience give my time and expertise to a paper that is destined to end up behind PUBLISHER’s paywall.

I know this can sound very self-righteous — I am sorry if it appears that way. I also recognise that there is serious collateral damage from limiting my reviewing efforts to open-access journals. My judgement is that, in the long term, that regrettable damage is a price worth paying, and I laid out my reasons a few years ago in this blog post: https://svpow.com/2011/10/17/collateral-damage-of-the-non-open-reviewing-boycott/

I hope you will understand my reasons for pushing hard towards an open-access future for all our scholarship; and I even hope that you might reconsider the time you yourself dedicate to PUBLISHER’s journal, and wonder whether it might be more fruitfully spent in helping an open-access palaeontology journal to improve its profile and reputation.

Yours, with best wishes,

Mike.

Then, to the corresponding author, a similar message:

Dear AUTHOR NAME,

I was invited by JOURNAL to review your new manuscript NAME OF PAPER. I’m writing to apologise for turning down that request, and to explain why I did so.

The reason is that I am wholly committed to the free availability of all scholarly research to everyone, and I cannot in good conscience give my time and expertise to a paper that is destined to end up behind PUBLISHER’s paywall.

I know this can sound very self-righteous — I am sorry if it appears that way. I also recognise that there is serious collateral damage from limiting my reviewing efforts to open-access journals. My judgement is that, in the long term, that regrettable damage is a price worth paying, and I laid out my reasons a few years ago in this blog post: https://svpow.com/2011/10/17/collateral-damage-of-the-non-open-reviewing-boycott/

I hope you will understand my reasons for pushing hard towards an open-access future for all our scholarship; and I even hope that you might consider withdrawing your work from JOURNAL, and instead submitting to one of the many fine open-access journals in our field. (Examples: Palaeontologia Electronica, Acta Palaeontologica Polonica, PLOS ONE, PeerJ, PalArch’s Journal of Vertebrate Paleontology, Royal Society Open Science.)

Yours, with apologies for the inconvenience and my best wishes,

Mike.

Anyone is welcome to use these messages as templates or inspiration if they are useful. Absolutely no rights reserved.

The previous post (Every attempt to manage academia makes it worse) has been a surprise hit, and is now by far the most-read post in this blog’s nearly-ten-year history. It evidently struck a chord with a lot of people, and I’ve been surprised — amazed, really — at how nearly unanimously people have agreed with it, both in the comments here and on Twitter.

But I was brought up short by this tweet from Thomas Koenig:

That is the question, isn’t it? Why do we keep doing this?

I don’t know enough about the history of academia to discuss the specific route we took to the place we now find ourselves in. (If others do, I’d be fascinated to hear.) But I think we can fruitfully speculate on the underlying problem.

Let’s start with the famous true story of the Hanoi rat epidemic of 1902. In a town overrun by rats, the authorities tried to reduce the population by offering a bounty on rat tails. Enterprising members of the populace responded by catching live rats, cutting off their tails to collect the bounty, then releasing the rats to breed, so more tails would be available in future. Some people even took to breeding rats for their tails.

Why did this go wrong? For one very simple reason: the measure being optimised for was not the one that mattered. What the authorities wanted to do was reduce the number of rats in Hanoi. For reasons that we will come to shortly, the proxy that they provided an incentive for was the number of rat tails collected. These are not the same thing — optimising for the latter did not help the former.

The badness of the proxy measure applies in two ways.

First, consider those who caught rats, cut their tails off and released them. They stand as counter-examples to the assumption that harvesting a rat-tail is equivalent to killing the rat. The proxy was bad because it assumed a false equivalence. It was possible to satisfy the proxy without advancing the actual goal.

Second, consider those who bred rats for their tails. They stand as counter-examples to the assumption that killing a rat is equivalent to decreasing the total number of live rats. Worse, if the breeders released their de-tailed captive-bred progeny into the city, their harvests of tails not only didn’t represent any decrease in the feral population, they represented an increase. So the proxy was worse than neutral because satisfying it could actively harm the actual goal.
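To make the two failure modes concrete, here is a minimal sketch (invented numbers, nothing from the historical record) of how the proxy, tails handed in, can look like a triumph while the thing the authorities actually cared about gets worse:

```python
# Toy model of the rat-tail bounty: the proxy (tails handed in) versus the
# real goal (fewer feral rats). All numbers are invented for illustration.

def run_bounty_scheme(months: int = 12) -> tuple[int, int]:
    feral_rats = 10_000      # the quantity the authorities actually care about
    tails_collected = 0      # the proxy the bounty rewards
    for _ in range(months):
        feral_rats = int(feral_rats * 1.05)  # natural breeding

        # Failure mode 1: catch rats, take the tails, release them alive.
        # The proxy rises; the true population is untouched.
        tails_collected += 500

        # Failure mode 2: breed captive rats for their tails, then release them.
        # The proxy rises AND the true population rises.
        bred_for_tails = 200
        tails_collected += bred_for_tails
        feral_rats += bred_for_tails
    return feral_rats, tails_collected

rats, tails = run_bounty_scheme()
print("tails handed in (proxy):", tails)   # looks like a triumphant success
print("feral rats (real goal): ", rats)    # worse than when the scheme started
```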

So far, so analogous to the perverse academic incentives we looked at last time. Where this gets really interesting is when we consider why the Hanoi authorities chose such a terribly counter-productive proxy for their real goal. Recall their object was to reduce the feral rat population. There were two problems with that goal.

First, the feral rat population is hard to measure. It’s so much easier to measure the number of tails people hand in. A metric is seductive if it’s easy to measure. In the same way, it’s appealing to look for your dropped car-keys under the street-lamp, where the light is good, rather than over in the darkness where you dropped them. But it’s equally futile.

Second — and this is crucial — it’s hard to properly reward people for reducing the feral rat population because you can’t tell who has done what. If an upstanding citizen leaves poison in the sewers and kills a thousand rats, there’s no way to know what he has achieved, and to reward him for it. The rat-tail proxy is appealing because it’s easy to reward.

The application of all this to academia is pretty obvious.

First, the things we really care about are hard to measure. The reason we do science — or, at least, the reason societies fund science — is to achieve breakthroughs that benefit society. That means important new insights, findings that enable new technology, ways of creating new medicines, and so on. But all these things take time to happen. It’s difficult to look at what a lab is doing now and say “Yes, this will yield valuable results in twenty years”. Yet that may be what is required: trying to evaluate it using a proxy of how many papers it gets into high-IF journals this year will most certainly militate against its doing careful work with long-term goals.

Second, we have no good way to reward the right individuals or labs. What we as a society care about is the advance of science as a whole. We want to reward the people and groups whose work contributes to the global project of science — but those are not necessarily the people who have found ways to shine under the present system of rewards: publishing lots of papers, shooting for the high-IF journals, skimping on sample-sizes to get spectacular results, searching through big data-sets for whatever correlations they can find, and so on.

In fact, when a scientist who is optimising for what gets rewarded slices up a study into multiple small papers, each with a single sensational result, and shops them around Science and Nature, all they are really doing is breeding rats.

If we want people to stop behaving this way, we need to stop rewarding them for it. (Side-effect: when people are rewarded for bad behaviour, people who behave well get penalised, lose heart, and leave the field. They lose out, and so does society.)

Q. “Well, that’s great, Mike. What do you suggest?”

A. Ah, ha ha, I’d been hoping you wouldn’t bring that up.

No-one will be surprised to hear that I don’t have a silver bullet. But I think the place to start is by being very aware of the pitfalls of the kinds of metrics that managers (including us, when wearing certain hats) like to use. Managers want metrics that are easy to calculate, easy to understand, and quick to yield a value. That’s why articles are judged by the impact factor of the journal they appear in: the calculation of the article’s worth is easy (copy the journal’s IF out of Wikipedia); it’s easy to understand (or, at least, it’s easy for people to think they understand what an IF is); and best of all, it’s available immediately. No need for any of that tedious waiting around for five years to see how often the article is cited, or waiting ten years to see what impact it has on the development of the field.
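For the record, the IF being copied out of Wikipedia there is just a journal-level ratio (the standard two-year definition, nothing particular to this argument), which is exactly why it is so cheap, so immediate, and so silent about any individual article. A sketch, with made-up numbers:

```python
# The standard two-year journal impact factor: a journal-level average,
# not a property of any article. Numbers below are invented for illustration.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Citations received this year to items the journal published in the
    previous two years, divided by the citable items published in those years."""
    return citations_this_year / citable_items_prev_two_years

# A journal whose 2015-2016 items picked up 3,000 citations during 2017,
# from 1,000 citable items, has a 2017 IF of 3.0: one number applied to
# every article it published, however good or bad each one was.
print(impact_factor(3_000, 1_000))  # 3.0
```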

Wise managers (and again, that means us when wearing certain hats) will face up to the unwelcome fact that metrics with these desirable properties are almost always worse than useless. Coming up with better metrics, if we’re determined to use metrics at all, is real work and will require an enormous educational effort.

One thing we can usefully do, whenever considering a proposed metric, is actively consider how it can and will be hacked. Black-hat it. Invest a day imagining you are a rational, selfish researcher in a regimen that uses the metric, and plan how you’re going to exploit it to give yourself the best possible score. Now consider whether the course of action you mapped out is one that will benefit the field and society. If not, dump the metric and start again.
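To give a toy version of that exercise, here is what the black-hat plan looks like against a hypothetical “count the papers” metric (invented numbers, not a real assessment scheme). It is, of course, just rat-breeding again:

```python
# Black-hatting a hypothetical "papers published this year" metric.
# Everything here is invented for illustration; no real scoring system is modelled.

def metric_score(papers_published: int) -> int:
    """The proposed metric: just count papers."""
    return papers_published

def honest_strategy(distinct_findings: int) -> int:
    """Write one thorough paper covering all the findings."""
    return 1

def salami_strategy(distinct_findings: int) -> int:
    """Publish each finding as its own minimal publishable unit."""
    return distinct_findings

findings = 6  # the same year's worth of actual science in both cases
print("honest score:", metric_score(honest_strategy(findings)))   # 1
print("salami score:", metric_score(salami_strategy(findings)))   # 6
# Same underlying research, six times the reward: the metric fails the
# black-hat test, so dump it and start again.
```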

Q. “Are you saying we should get rid of metrics completely?”

A. Not yet; but I’m open to the possibility.

Given metrics’ terrible track-record of hackability, I think we’re now at the stage where the null hypothesis should be that any metric will make things worse. There may well be exceptions, but the burden of proof should be on those who want to use them: they must show that they will help, not just assume that they will.

And what if we find that every metric makes things worse? Then the only rational thing to do would be not to use any metrics at all. Some managers will hate this, because their jobs depend on putting numbers into boxes and adding them up. But we’re talking about the progress of research to benefit society, here.

We have to go where the evidence leads. Dammit, Jim, we’re scientists.

I’ve been on Twitter since April 2011 — nearly six years. A few weeks ago, for the first time, something I tweeted broke the thousand-retweets barrier. And I am really unhappy about it. For two reasons.

First, it’s not my own content — it’s a screen-shot of Table 1 from Edwards and Roy (2017):

[Screenshot: Table 1 from Edwards and Roy (2017)]

And second, it’s so darned depressing.

The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.

In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people:

  • Goodhart’s Law is the most succinct: “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s Law is the most explicit: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse.

As I say, this is well known. There’s even a term for it in social theory: reflexivity. And yet we persist in doing idiot things that can only possibly have this result:

  • Assessing school-teachers on the improvement their kids show in tests between the start and end of the year (which obviously results in their doing all they can to depress the start-of-year tests).
  • Assessing researchers by the number of their papers (which can only result in work being sliced into minimal publishable units).
  • Assessing them — heaven help us — on the impact factors of the journals their papers appear in (which feeds the brand-name fetish that is crippling scholarly communication).
  • Assessing researchers on whether their experiments are “successful”, i.e. whether they find statistically significant results (which inevitably results in p-hacking and HARKing).

What’s the solution, then?

I’ve been reading the excellent blog of economist Tim Harford for a while. That arose from reading his even more excellent book The Undercover Economist (Harford 2007), which gave me a crash-course in the basics of how economies work, how markets help, how they can go wrong, and much more. I really can’t say enough good things about this book: it’s one of those that I feel everyone should read, because the issues are so important and pervasive, and Harford’s explanations are so clear.

In a recent post, Why central bankers shouldn’t have skin in the game, he makes this point:

The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.

I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?

[Read on to Why do we manage academia so badly?]

References

Edwards, Marc A., and Siddhartha Roy. 2017. Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environmental Engineering Science 34(1):51-61.

Harford, Tim. 2007. The Undercover Economist.

Bonus

Here is a nicely formatted full-page version of the Edwards and Roy table, for you to print out and stick on all the walls of your university. My thanks to David Roberts for preparing it.

Back in February last year, I had the privilege of giving one of the talks in the University of Manchester’s PGCert course “Open Knowledge in Higher Education”. I took the subject “Should science always be open?”

My plan was to give an extended version of a talk I’d given previously at ESOF 2014. But the sessions before mine raised all sorts of issues about copyright, and its effect on scholarly communication and the progress of science, and so I found myself veering off piste. The first eight and a half minutes are as planned; from there, I go off on an extended tangent. Well. See what you think.

The money quote (starting at 12m10s): “What is copyright? It’s a machine for preventing the creation of wealth.”

It’s open access week! As part of their involvement with OA Week, Jisc interviewed me. You can read the interview here. A brief taster:

What’s holding back infrastructure development?

“The real problem, of course, as always, is not the technical one, it’s the social one. How do you persuade people to turn away from the brands that they’ve become comfortable with?

We really are only talking about brands, the value of publishing in, say, a big name journal rather than publishing in a preprint repository. It is nothing to do with the value of the research that gets published. It’s like buying a pair of jeans that are ten times as expensive as the exact same pair of jeans in Marks and Spencer because you want to get the ones that have an expensive label. Now ask why we’re so stupid that we care about the labels.”

Read the full interview here.

As explained in careful detail over at Stupid Patent of the Month, Elsevier has applied for, and been granted, a patent for online peer-review. The special sauce that persuaded the US Patent Office that this is a new invention is cascading peer review — an idea so obvious and so well-established that even The Scholarly Kitchen was writing about it as a commonplace in 2010.

Apparently this is from the actual patent. I can’t verify that at the moment, as the site hosting it seems to be down.

Well. What can this mean?

A cynic might think that this is the first step an untrustworthy company would take preparatory to filing a lot of time-wasting and resource-sapping nuisance lawsuits against its smaller, faster-moving competitors. They certainly have previous in the courts: remember that they have brought legal action against their own customers, as well as threatening Academia.edu and of course trying to take Sci-Hub down.

Elsevier representatives are talking this down: Tom Reller has tweeted “There is no need for concern regarding the patent. It’s simply meant to protect our own proprietary waterfall system from being copied” — which would be fine, had their proprietary waterfall system not been itself copied from the ample prior art. Similarly, Alicia Wise has said on a public mailing list “People appear to be suggesting that we patented online peer review in an attempt to own it. No, we just patented our own novel systems.” Well. Let’s hope.

But Cathy Wojewodzki, on the same list, asked the key question:

I guess our real question is Why did you patent this? What is it you hope to market or control?

We await a meaningful answer.