Back in 2012, in response to the Cost of Knowledge declaration, Elsevier made all articles in “primary math journals” free to read, distribute and adapt after a four-year rolling window. Today, as David Roberts points out, it seems they have silently withdrawn some of those rights. In particular, the “free” articles can no longer be redistributed or adapted — which, for example, prevents their use in teaching or in Wikipedia articles.

We don’t know when this changed. It just did, quietly, at some point after the Cost of Knowledge anger had died down, when no-one was watching them carefully. So here, once more, Elsevier prove that they are bad actors who simply cannot be trusted.

There is a broader and more important point here: we simply can’t build a meaningfully open scholarly infrastructure that is dependent on the whims of corporations. It can’t be done.

Whatever corporations like Elsevier give us one day, they can and will take away another day. They can’t help themselves. It’s in their nature. And, really, it’s unreasonable of us to expect anything different from a corporation whose reason for existing is to enrich its shareholders.

So to have a genuinely open scholarly infrastructure, there is no real alternative to building it ourselves, within the scholarly community. It’s worse than useless to sit around waiting for the likes of Elsevier to gift us the infrastructure we need. It’s not in their interests.

So once more, folks: there’s no need for us to be hostile to Elsevier et al. Just walk away. Do not deal with them. They are not on your side. They never have been, and they never will be. They will give just enough ground to defuse anger when it threatens their bottom line; that’s all. Then they will take the ground back when it suits them.


Note. This post is based on a series of tweets.

The previous post (Every attempt to manage academia makes it worse) has been a surprise hit, and is now by far the most-read post in this blog’s nearly-ten-year history. It evidently struck a chord with a lot of people, and I’ve been surprised — amazed, really — at how nearly unanimously people have agreed with it, both in the comments here and on Twitter.

But I was brought up short by this tweet from Thomas Koenig:

That is the question, isn’t it? Why do we keep doing this?

I don’t know enough about the history of academia to discuss the specific route we took to the place we now find ourselves in. (If others do, I’d be fascinated to hear.) But I think we can fruitfully speculate on the underlying problem.

Let’s start with the famous true story of the Hanoi rat epidemic of 1902. In a town overrun by rats, the authorities tried to reduce the population by offering a bounty on rat tails. Enterprising members of the populace responded by catching live rats, cutting off their tails to collect the bounty, then releasing the rats to breed, so more tails would be available in future. Some people even took to breeding rats for their tails.

Why did this go wrong? For one very simple reason: because the measure optimised for was not the one that mattered. What the authorities wanted to do was reduce the number of rats in Hanoi. For reasons that we will come to shortly, the proxy that they provided an incentive for was the number of rat tails collected. These are not the same thing — optimising for the latter did not help the former.

The proxy measure was bad in two distinct ways.

First, consider those who caught rats, cut their tails off and released them. They stand as counter-examples to the assumption that harvesting a rat-tail is equivalent to killing the rat. The proxy was bad because it assumed a false equivalence. It was possible to satisfy the proxy without advancing the actual goal.

Second, consider those who bred rats for their tails. They stand as counter-examples to the assumption that killing a rat is equivalent to decreasing the total number of live rats. Worse, if the breeders released their de-tailed captive-bred progeny into the city, their harvests of tails not only didn’t represent any decrease in the feral population, they represented an increase. So the proxy was worse than neutral because satisfying it could actively harm the actual goal.
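
Both failure modes are easy to see in a toy simulation. The sketch below is a minimal model with invented numbers (the catch rate, breeding rate and starting population are all made up for illustration): it compares an honest tail-collector, who kills each rat he harvests, with a gamer who releases de-tailed rats to keep breeding.

```python
# Toy model of the rat-tail bounty. All numbers are invented for illustration.
# Real goal: a small rat population. Rewarded proxy: tails collected.

def run_city(weeks, kill_harvested_rats):
    rats = 10_000
    tails = 0
    for _ in range(weeks):
        harvested = int(rats * 0.05)   # 5% of rats are caught each week
        tails += harvested
        if kill_harvested_rats:
            rats -= harvested          # honest hunters kill the rat
        # otherwise de-tailed rats are released and keep breeding
        rats = int(rats * 1.02)        # 2% weekly population growth
    return rats, tails

honest_rats, honest_tails = run_city(52, kill_harvested_rats=True)
gamed_rats, gamed_tails = run_city(52, kill_harvested_rats=False)
print(f"honest: population {honest_rats:6d}, tails {honest_tails:6d}")
print(f"gamed:  population {gamed_rats:6d}, tails {gamed_tails:6d}")
```

Under these made-up numbers the gamer both collects more tails and leaves the city with more rats: the proxy and the goal have come apart completely.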

So far, so analogous to the perverse academic incentives we looked at last time. Where this gets really interesting is when we consider why the Hanoi authorities chose such a terribly counter-productive proxy for their real goal. Recall that their objective was to reduce the feral rat population. There were two problems with that goal.

First, the feral rat population is hard to measure. It’s so much easier to measure the number of tails people hand in. A metric is seductive if it’s easy to measure. In the same way, it’s appealing to look for your dropped car-keys under the street-lamp, where the light is good, rather than over in the darkness where you dropped them. But it’s equally futile.

Second — and this is crucial — it’s hard to properly reward people for reducing the feral rat population because you can’t tell who has done what. If an upstanding citizen leaves poison in the sewers and kills a thousand rats, there’s no way to know what he has achieved, and to reward him for it. The rat-tail proxy is appealing because it’s easy to reward.

The application of all this to academia is pretty obvious.

First, the things we really care about are hard to measure. The reason we do science — or, at least, the reason societies fund science — is to achieve breakthroughs that benefit society. That means important new insights, findings that enable new technology, ways of creating new medicines, and so on. But all these things take time to happen. It’s difficult to look at what a lab is doing now and say “Yes, this will yield valuable results in twenty years”. Yet that may be what is required: trying to evaluate it using a proxy of how many papers it gets into high-IF journals this year will most certainly militate against its doing careful work with long-term goals.

Second, we have no good way to reward the right individuals or labs. What we as a society care about is the advance of science as a whole. We want to reward the people and groups whose work contributes to the global project of science — but those are not necessarily the people who have found ways to shine under the present system of rewards: publishing lots of papers, shooting for the high-IF journals, skimping on sample-sizes to get spectacular results, searching through big data-sets for whatever correlations they can find, and so on.

In fact, when a scientist who is optimising for what gets rewarded slices up a study into multiple small papers, each with a single sensational result, and shops them around Science and Nature, all they are really doing is breeding rats.

If we want people to stop behaving this way, we need to stop rewarding them for it. (Side-effect: when people are rewarded for bad behaviour, people who behave well get penalised, lose heart, and leave the field. They lose out, and so does society.)

Q. “Well, that’s great, Mike. What do you suggest?”

A. Ah, ha ha, I’d been hoping you wouldn’t bring that up.

No-one will be surprised to hear that I don’t have a silver bullet. But I think the place to start is by being very aware of the pitfalls of the kinds of metrics that managers (including us, when wearing certain hats) like to use. Managers want metrics that are easy to calculate, easy to understand, and quick to yield a value. That’s why articles are judged by the impact factor of the journal they appear in: the calculation of the article’s worth is easy (copy the journal’s IF out of Wikipedia); it’s easy to understand (or, at least, it’s easy for people to think they understand what an IF is); and best of all, it’s available immediately. No need for any of that tedious waiting around for five years to see how often the article is cited, or waiting ten years to see what impact it has on the development of the field.

Wise managers (and again, that means us when wearing certain hats) will face up to the unwelcome fact that metrics with these desirable properties are almost always worse than useless. Coming up with better metrics, if we’re determined to use metrics at all, is real work and will require an enormous educational effort.

One thing we can usefully do, whenever considering a proposed metric, is actively consider how it can and will be hacked. Black-hat it. Invest a day imagining you are a rational, selfish researcher in a regimen that uses the metric, and plan how you’re going to exploit it to give yourself the best possible score. Now consider whether the course of action you mapped out is one that will benefit the field and society. If not, dump the metric and start again.
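
As a worked example, here is what black-hatting the simplest possible metric — raw paper count — might look like. The model is entirely invented: assume a researcher has one study’s worth of results, that the metric just counts papers, and that every extra slice adds a small overhead that erodes the scientific value of the whole.

```python
# Black-hat sketch for the metric "number of papers published".
# Invented model: slicing one study into k papers scores k on the metric,
# but each extra slice adds overhead that reduces total scientific value.

def metric_score(slices: int) -> int:
    return slices                      # the metric just counts papers

def scientific_value(slices: int, overhead: float = 0.05) -> float:
    return max(0.0, 1.0 - overhead * (slices - 1))

for k in (1, 2, 5, 10):
    print(f"{k:2d} papers -> metric score {metric_score(k):2d}, "
          f"scientific value {scientific_value(k):.2f}")
```

The rational selfish researcher maximises the metric by slicing as thinly as journals will allow, and the value of the work falls as they do it. By the test above, this metric fails: dump it and start again.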

Q. “Are you saying we should get rid of metrics completely?”

A. Not yet; but I’m open to the possibility.

Given metrics’ terrible track-record of hackability, I think we’re now at the stage where the null hypothesis should be that any metric will make things worse. There may well be exceptions, but the burden of proof should be on those who want to use them: they must show that they will help, not just assume that they will.

And what if we find that every metric makes things worse? Then the only rational thing to do would be not to use any metrics at all. Some managers will hate this, because their jobs depend on putting numbers into boxes and adding them up. But we’re talking about the progress of research to benefit society, here.

We have to go where the evidence leads. Dammit, Jim, we’re scientists.

I’ve been on Twitter since April 2011 — nearly six years. A few weeks ago, for the first time, something I tweeted broke the thousand-retweets barrier. And I am really unhappy about it. For two reasons.

First, it’s not my own content — it’s a screen-shot of Table 1 from Edwards and Roy (2017):

[Screen-shot of Table 1 from Edwards and Roy (2017)]

And second, it’s so darned depressing.

The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.

In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people:

  • Goodhart’s Law is most succinct: “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s Law is the most explicit: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse.

As I say, this is well known. There’s even a term for it in social theory: reflexivity. And yet we persist in doing idiot things that can only possibly have this result:

  • Assessing school-teachers on the improvement their kids show in tests between the start and end of the year (which obviously results in their doing all they can to depress the start-of-year scores).
  • Assessing researchers by the number of their papers (which can only result in work being sliced into minimal publishable units).
  • Assessing them — heaven help us — on the impact factors of the journals their papers appear in (which feeds the brand-name fetish that is crippling scholarly communication).
  • Assessing researchers on whether their experiments are “successful”, i.e. whether they find statistically significant results (which inevitably results in p-hacking and HARKing — see the sketch below).
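
On that last point, it’s worth seeing just how easy p-hacking is. The sketch below (a hypothetical analysis, not anyone’s real data) measures twenty outcomes on pure noise and reports only the most “significant” one:

```python
# Minimal p-hacking demonstration: test 20 outcomes on pure-noise data
# and report only the best p-value. No real effect exists anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_outcomes, n_per_group = 20, 30

p_values = []
for _ in range(n_outcomes):
    treatment = rng.normal(size=n_per_group)  # both groups drawn from the
    control = rng.normal(size=n_per_group)    # same distribution: no effect
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

print(f"best p-value out of {n_outcomes}: {min(p_values):.3f}")
print(f"chance of at least one p < 0.05: {1 - 0.95 ** n_outcomes:.0%}")
```

With twenty independent tests there is roughly a 64% chance that at least one comes out “significant” at p < 0.05, so a researcher rewarded only for significant results will, quite rationally, keep measuring outcomes until one does.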

What’s the solution, then?

I’ve been reading the excellent blog of economist Tim Harford for a while. That arose from reading his even more excellent book The Undercover Economist (Harford 2007), which gave me a crash-course in the basics of how economies work, how markets help, how they can go wrong, and much more. I really can’t say enough good things about this book: it’s one of those that I feel everyone should read, because the issues are so important and pervasive, and Harford’s explanations are so clear.

In a recent post, Why central bankers shouldn’t have skin in the game, he makes this point:

The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.

I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?

[Read on to Why do we manage academia so badly?]

References

Edwards, Marc A., and Siddhartha Roy. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34(1):51–61.

Harford, Tim. 2007. The Undercover Economist.

Bonus

Here is a nicely formatted full-page version of the Edwards and Roy table, for you to print out and stick on all the walls of your university. My thanks to David Roberts for preparing it.

Back in February last year, I had the privilege of giving one of the talks in the University of Manchester’s PGCert course “Open Knowledge in Higher Education”. I took the subject “Should science always be open?”

My plan was to give an extended version of a talk I’d given previously at ESOF 2014. But the sessions before mine raised all sorts of issues about copyright, and its effect on scholarly communication and the progress of science, and so I found myself veering off piste. The first eight and a half minutes are as planned; from there, I go off on an extended tangent. Well. See what you think.

The money quote (starting at 12m10s): “What is copyright? It’s a machine for preventing the creation of wealth.”

The interview that I did for Jisc was conducted via Skype, by the very able Michelle Pauli. We talked for some time, and obviously much of what was said had to be cut for length (and no doubt some repetition).

To my pleasant surprise, though, Michelle prepared a complete transcript of our talk before the cutting started. So in the tradition of DVD movies, I am now able to offer the Deleted Scenes. I hope that some of what follows is of interest.


How would you describe the current state of play of open access in the UK?

I think there are two answers to that. One is that it’s enormously encouraging that it’s come so far over the last few years. The other is that it’s just terribly discouraging that there’s so very far to go, and that so much of the control of how we go about publishing things is still in the hands of organisations that really have no interest — in the full sense — in how science progresses, but are driven primarily by what publishing can do for them commercially.

How about the level of debate in recent times?

The situation is that we have these huge, very well-established publishers that have been running and dominating the game for decades, and commercial consolidations have meant that even the relatively small number of publishers that dominated 20 years ago are even fewer now — so people like Elsevier and Wiley and Taylor & Francis now control a vast proportion of the overall academic publishing market.

So those organisations obviously exist primarily to make money for their shareholders or their owners. I’m not saying they have no other motivations, but that’s their primary motivation, and certainly the executives that they hire to run the companies have that goal very much in mind. So it’s not surprising that those companies are desperate to hang onto what is essentially a cash mill for them, where they’re working with content that’s generated by very highly skilled professionals, where they pay no money in exchange for that content, and go on to sell it.

Obviously, they’re desperate to hang on to that market model and, as a result, what we often see from representatives of these publishers is statements that, I think … charitably you could say, are terribly misinformed; a more cynical and perhaps more realistic perspective would be that they’re deliberately misleading and clouding issues, trying to reopen discussions that have long been decided, casting doubt on things that really carry no doubt, forever equivocating and trying to add complexity to what are essentially simple discussions.

So what we end up with is situations where we have groups of people with an interest in open access and scholarly publishing more generally, gathered together, and we could be having discussions about how precisely we want to push the whole thing forward; but you always find people from these major publishers as well, impeding those discussions and throwing up road blocks and ifs and buts and maybes, and slowing things down or bringing them to a complete halt. So that’s what I mean about the quality of the discussion.

Although the quality of the debate within the open access advocacy field is also quite … I can’t think of the word. What word am I looking for here?

It can be disappointingly rancorous at times …

[Image: a rancor]

… and the reason is that we’ve got these two routes towards open access, which we call Gold and Green. And each of those has advocates, who at times seem to be not so much open access advocates who favour one of the routes, but advocates for one of the routes who are actively opposed to the other. And that can be unhelpful. But that said, these are some very difficult discussions to reach a conclusion on, and I admit I go backwards and forwards myself on which of these two approaches is going to be better in the long term.

People talk about Gold open access suffering from the fact that it’s expensive. Of course that’s only true if you ignore the money that institutions carry on spending on subscriptions while they’re running Green open access. So I think a lot of the arguments that are used in favour of either of those routes against the other can be misleading, and are probably to some degree tied up with the emotional investment people have in the different approaches.

How are things going to move forward? What’s the best way to work with legacy publishers to keep things moving forward — or is that not even possible? And then what happens?

Honestly, my take is that the existence of the legacy publishers is a net negative. If I could wave a magic wand and have those publishers cease to exist overnight, I would do it unhesitatingly. Then we’d have a period of two or three months of chaos and then we’d settle into an equilibrium that I think would be much better than what we have at the moment.

I’m not really interested in working with the legacy publishers at the moment. I have often tried to communicate constructively with people from Elsevier in particular, and along the way I’ve written lots of blog posts about how I feel Elsevier could change its behaviour in a way that would make it not just tolerable but actively seen as a friend of progress. I’ve reached a point now where I’ve realized that just isn’t going to happen and I don’t really feel that there’s any … while there are individuals at all of those publishers who would very much like to do the right thing, the organisations they are working for just make it impossible. Not going to work.

We won’t make good progress by, for example, persuading Elsevier to slightly loosen their requirements on which things can be published as Green open access, and how long the embargos are, and what licence they are available under. I think ultimately where we need to get to is somewhere where we’re just not beholden to these organisations at all, and we’re doing what’s best for scholarship rather than what’s best for Elsevier.

Going back to infrastructure, and the possibilities that are there: does the community have to own its own infrastructure? Otherwise you’d have a situation like Elsevier acquiring SSRN and Mendeley — things being taken over again. How does that work?

[Part of the answer was included in the published interview. Then:] Whatever we build absolutely needs to be wrapped around with these kinds of things. That’s to do with the software being open, for example, so that if a bad actor does somehow gain control of the organisation, it can be forked and run elsewhere. It’s to do with having financial firewalls between various parts of the organisation. There’s a whole bunch of stuff that they’ve really thought through in detail.

I wonder if you could put on your futurologist hat now and finally say what do you think is going to happen next in open access?

I couldn’t pick one out. There are several possibilities. One is that we’re starting to see deals coming up now where large organisations are negotiating offsetting deals for open access article processing charges with the big publishers. What they’re doing is trying to make a sort of revenue-neutral conversion from the current system to a Gold open access one. It may be that that eventually catches on and becomes the way that increasing numbers of organisations and countries make things happen. Would I be happy with that? Yeah, I would. Because although I still think it’s bad that these large corporations should have control over the scientific record, I think if it’s freely available to anyone that’s still a huge step forward from where we are now.

Another possibility we’re seeing is that every now and then we’re starting to get stories about universities just cancelling various subscription contracts — or, more realistically, not renewing them when they expire — and finding other ways to make do. Presumably, the money that they were spending on that, they’re investing in other more open forms of scholarly infrastructure. So the long term future that I suppose I would like to see is an increase, an acceleration in that tendency. Resulting in far more money being diverted from subscriptions and being put into other ways of disseminating scholarly outputs.

Will Brexit have any impact on this area?

Yeah, what a horrible thought. A lot of the really good things that have been happening in open access recently have been in the European Union. You probably know that the Horizon 2020 programme has an enormous amount of funding for open access, and for building open access infrastructure; it is responsible for the OpenAIRE repository, which is joining up the scholarly record of European countries all around the continent. The idea of being isolated from all of that is just one of the many awful consequences of the most short-sighted political decision in my lifetime.

So yeah, absolutely it’s a setback. Because everything we want to do in academia completely doesn’t respect national boundaries at all. By its very nature what we do is international and we have international collaborators and we work in other countries. So anything like this, that’s to do with rebuilding the historic borders that used to separate our various countries, is a terrible step back not only for academia but for civilisation.

A few years ago, we started the web-site Who Needs Access? to highlight some of the many ways that people outside academia want and need access to published scholarly works: fossil preparators, small businesses, parents of children with rare diseases, developing-world entrepreneurs, disability rights campaigners and many more.

[Image: Christy Collins]

Who Needs Access? is an anecdotal site, because often people will respond more to stories about individuals than to numbers. As has been said, “one death is a tragedy; a million deaths is a statistic”.

But as scientists, we also want to be able to point to evidence for the wider importance of open access outside academia. To that end, I am delighted to announce that we now have a Who Needs Access? Bibliography, kindly contributed by ElHassan ElSabry. (ElHassan is doing his Ph.D on the wider impact of open access, at the National Graduate Institute for Policy Studies in Tokyo. Part of his work will involve analysing and synthesising the articles in this bibliography, so we can expect additional useful contributions from him.)

Check out the bibliography!

It’s open access week! As part of their involvement with OA Week, Jisc interviewed me. You can read the interview here. A brief taster:

What’s holding back infrastructure development?

“The real problem, of course, as always, is not the technical one, it’s the social one. How do you persuade people to turn away from the brands that they’ve become comfortable with?

We really are only talking about brands, the value of publishing in, say, a big name journal rather than publishing in a preprint repository. It is nothing to do with the value of the research that gets published. It’s like buying a pair of jeans that are ten times as expensive as the exact same pair of jeans in Marks and Spencer because you want to get the ones that have an expensive label. Now ask why we’re so stupid that we care about the labels.”

Read the full interview here.
