Why do we manage academia so badly?

March 22, 2017

The previous post (Every attempt to manage academia makes it worse) has been a surprise hit, and is now by far the most-read post in this blog’s nearly-ten-year history. It evidently struck a chord with a lot of people, and I’ve been surprised — amazed, really — at how nearly unanimously people have agreed with it, both in the comments here and on Twitter.

But I was brought up short by this tweet from Thomas Koenig:

That is the question, isn’t it? Why do we keep doing this?

I don’t know enough about the history of academia to discuss the specific route we took to the place we now find ourselves in. (If others do, I’d be fascinated to hear.) But I think we can fruitfully speculate on the underlying problem.

Let’s start with the famous true story of the Hanoi rat epidemic of 1902. In a town overrun by rats, the authorities tried to reduce the population by offering a bounty on rat tails. Enterprising members of the populace responded by catching live rats, cutting off their tails to collect the bounty, then releasing the rats to breed, so more tails would be available in future. Some people even took to breeding rats for their tails.

Why did this go wrong? For one very simple reason: because the measure optimised for was not the one that mattered. What the authorities wanted to do was reduce the number of rats in Hanoi. For reasons that we will come to shortly, the proxy that they provided an incentive for was the number of rat tails collected. These are not the same thing — optimising for the latter did not help the former.

The badness of the proxy measure applies in two ways.

First, consider those who caught rats, cut their tails off and released them. They stand as counter-examples to the assumption that harvesting a rat-tail is equivalent to killing the rat. The proxy was bad because it assumed a false equivalence. It was possible to satisfy the proxy without advancing the actual goal.

Second, consider those who bred rats for their tails. They stand as counter-examples to the assumption that killing a rat is equivalent to decreasing the total number of live rats. Worse, if the breeders released their de-tailed captive-bred progeny into the city, their harvests of tails not only didn’t represent any decrease in the feral population, they represented an increase. So the proxy was worse than neutral because satisfying it could actively harm the actual goal.

So far, so analogous to the perverse academic incentives we looked at last time. Where this gets really interesting is when we consider why the Hanoi authorities chose such a terribly counter-productive proxy for their real goal. Recall their object was to reduce the feral rat population. There were two problems with that goal.

First, the feral rat population is hard to measure. It’s so much easier to measure the number of tails people hand in. A metric is seductive if it’s easy to measure. In the same way, it’s appealing to look for your dropped car-keys under the street-lamp, where the light is good, rather than over in the darkness where you dropped them. But it’s equally futile.

Second — and this is crucial — it’s hard to properly reward people for reducing the feral rat population because you can’t tell who has done what. If an upstanding citizen leaves poison in the sewers and kills a thousand rats, there’s no way to know what he has achieved, and to reward him for it. The rat-tail proxy is appealing because it’s easy to reward.

The application of all this to academia is pretty obvious.

First, the things we really care about are hard to measure. The reason we do science — or, at least, the reason societies fund science — is to achieve breakthroughs that benefit society. That means important new insights, findings that enable new technology, ways of creating new medicines, and so on. But all these things take time to happen. It’s difficult to look at what a lab is doing now and say “Yes, this will yield valuable results in twenty years”. Yet that may be what is required: trying to evaluate it using a proxy of how many papers it gets into high-IF journals this year will almost certainly militate against its doing careful work with long-term goals.

Second, we have no good way to reward the right individuals or labs. What we as a society care about is the advance of science as a whole. We want to reward the people and groups whose work contributes to the global project of science — but those are not necessarily the people who have found ways to shine under the present system of rewards: publishing lots of papers, shooting for the high-IF journals, skimping on sample-sizes to get spectacular results, searching through big data-sets for whatever correlations they can find, and so on.

In fact, when a scientist who is optimising for what gets rewarded slices up a study into multiple small papers, each with a single sensational result, and shops them around Science and Nature, all they are really doing is breeding rats.

If we want people to stop behaving this way, we need to stop rewarding them for it. (Side-effect: when people are rewarded for bad behaviour, people who behave well get penalised, lose heart, and leave the field. They lose out, and so does society.)

Q. “Well, that’s great, Mike. What do you suggest?”

A. Ah, ha ha, I’d been hoping you wouldn’t bring that up.

No-one will be surprised to hear that I don’t have a silver bullet. But I think the place to start is by being very aware of the pitfalls of the kinds of metrics that managers (including us, when wearing certain hats) like to use. Managers want metrics that are easy to calculate, easy to understand, and quick to yield a value. That’s why articles are judged by the impact factor of the journal they appear in: the calculation of the article’s worth is easy (copy the journal’s IF out of Wikipedia); it’s easy to understand (or, at least, it’s easy for people to think they understand what an IF is); and best of all, it’s available immediately. No need for any of that tedious waiting around for five years to see how often the article is cited, or waiting ten years to see what impact it has on the development of the field.

Wise managers (and again, that means us when wearing certain hats) will face up to the unwelcome fact that metrics with these desirable properties are almost always worse than useless. Coming up with better metrics, if we’re determined to use metrics at all, is real work and will require an enormous educational effort.

One thing we can usefully do, whenever considering a proposed metric, is actively consider how it can and will be hacked. Black-hat it. Invest a day imagining you are a rational, selfish researcher in a regimen that uses the metric, and plan how you’re going to exploit it to give yourself the best possible score. Now consider whether the course of action you mapped out is one that will benefit the field and society. If not, dump the metric and start again.

Q. “Are you saying we should get rid of metrics completely?”

A. Not yet; but I’m open to the possibility.

Given metrics’ terrible track-record of hackability, I think we’re now at the stage where the null hypothesis should be that any metric will make things worse. There may well be exceptions, but the burden of proof should be on those who want to use them: they must show that they will help, not just assume that they will.

And what if we find that every metric makes things worse? Then the only rational thing to do would be not to use any metrics at all. Some managers will hate this, because their jobs depend on putting numbers into boxes and adding them up. But we’re talking about the progress of research to benefit society, here.

We have to go where the evidence leads. Dammit, Jim, we’re scientists.

27 Responses to “Why do we manage academia so badly?”

  1. alexholcombe Says:

    Impact Factor wasn’t intended to be used to evaluate people. No one told managers to start using it; they just started using it in the absence of anything better that could feasibly be deployed. This suggests that the problem is managerialism and its concomitants, such as the way grants are awarded. These trump any metric-designer’s suggestions about how the metric should be used, and even policy announcements like “thou shalt not use impact factor”; such announcements may not make people materially change their behavior: they simply shift to using their impression of journal prestige, which may itself come from impact factor.

    If non-perverse metrics cannot be invented, perhaps the only solution is to eliminate the need for an individual (manager; grant referee) to evaluate dozens of people. In the case of grants, we may have to run a lottery where only the winners are looked at closely.

    In the case of universities, I certainly do not suggest that the number of managers be massively ramped up to “improve” the manager:researcher ratio. So, we must reduce managerialism.

  2. dale Says:

    Interesting. Metrics push people to do sloppy work. Without metrics, some people tend to slide through and accomplish only what is necessary to pick up the paycheque every month. What we’re really asking is … why are you in science? It’s no different than politics. I think, realistically, we have to take a big step back and realize that everybody did not come out of the same mould. Some people work slowly but create breakthroughs. Some work slowly and create little. Some work fast and create breakthroughs. Some work fast and end up producing sloppy work. The only way around this is to set up an evaluation of each employee outlining their strengths AND weaknesses. Award them when they bring those weaknesses into line. Phil Currie used to do this. It was an excellent idea except that, in gov’t, there was no incentive when employees did, unless you were willing to risk conflict-of-interest guidelines with “under the table” rewards. But this is always hazardous. Some employees in his crew lost all incentive and fell way below standards … on purpose. So, in gov’t, no such luck. He did say, however, that you got out what you put in. This made sense. Train, learn as much as possible, then leave when you are on top of your game. The incentives reside out there in the world of private enterprise. This is unfortunate for gov’t institutes. I would say … evaluate, but leave incentives and rewards up to the employee. The institute ONLY provides a workplace to pursue one’s career. The best will leave. That’s human nature. It’s a waste of time to try to “modify” it. My two cents.

  3. Andy Says:

    What you are describing goes by the name of “Campbell’s law” (among others). There was an interesting discussion of the problem in a completely different field here:
    http://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2017/03/incentivizing-politicians.html#comments

  4. StupendousMan Says:

    (Lost earlier comment – apologies if this is a duplicate)

    Two thoughts:

    a) this discussion, and many others I’ve heard over the years, involves the voices of ordinary faculty, post-docs and grad students. But those who make decisions about hiring, and the criteria used in the hiring process, are department heads, deans, provosts and presidents: upper-level administrators. Our discussions don’t affect their decisions — so talking is largely pointless.

    If one wants to make a difference, one should stop doing science, concentrate on advancing through management levels for a decade, reach a position of dean (for example), and THEN take action. I choose not to stop doing science — I don’t want to be a manager — so I won’t make a difference.

    b) if one chooses a metric such as “number of publications” or “grant money awarded”, one can make a claim to be objective and free of some types of bias (at least to first order). You have argued that metrics are fatally flawed. But if one chooses a subjective method of judging faculty, such as “watch and see what happens for 5 or 10 years”, then one will be accused of some type(s) of bias, based on age, sex, religion, race, or other factor. It is impossible to refute such claims. The result will be that some members of the academic community will be unhappy, no matter what.

    My conclusion is that there’s nothing I can do about this issue, so I will simply go back to the lab and ignore it.

    Depressing, isn’t it?

  5. Sue Says:

    Isn’t the ultimate issue that such metrics are demanded by universities’ paymasters? Individual universities cannot opt out of certain metrics (eg NSS, PTES, REF etc) because to do so would severely threaten their traditional funding streams (government; student recruitment via league table rankings, etc), and they have balanced their books (and in many cases rapid expansion) on the back of such funding. Thus unless a university finds new funding streams without those demands, they are stuck in a deadlock. Redundancies can reverse some over-expansion, but not solve the demand issue. Metrics are (in these circumstances) mandatory, and so people must be employed to administer them.

    Furthermore, surveys such as the NSS and PTES are presented as methods to give “your” institution feedback (why else would students complete them?). But this (cleverly?) gives the impression that universities themselves have, completely of their own free will, adopted these tools. I have many colleagues who are extremely angry at the implementation of metrics – whether in principle, due to the impact on their workload, or because the data they provide isn’t even very useful – and let their institutions know so in no uncertain terms. But this ire is misdirected. It merely diffuses the direct action that could, otherwise, be targeted collectively at the decision-makers driving the demand for metrics.

  6. dkernohan Says:

    I’ve had this conversation with hundreds of researchers and support staff in universities all round the country; it is hugely depressing. But senior managers also know that these metrics are useless, as do most funders, most policymakers and even most publishers – again, this is based on people I have spoken to. Everybody seems to want good science to be well supported and stable; everybody seems to know that current (and potentially all) metrics are not the way to bring this about.

    We all seem to be assuming “someone else” cares about this stuff and working to please this person. But who is it? And how did they become the most powerful person in UK HE?

    Or are we just looking at the performance of accountability?

  7. Sue Says:

    My feeling (open to debate) is that neither universities nor government really believe in these metrics, but government wants something to report to the public in the interest of ‘transparency’ (accountability / backside-covering). As it ever was.

    By extension, one reason for the arts, humanities and some social sciences in particular being under the cosh is that they can’t deliver tidily packaged ROI. This means investment is ‘risky’ from the point of view of a funder open to public or political criticism.

    For example, private funders of research such as Wellcome and Leverhulme have much less public accountability and so are freed up to ‘gamble’ on basic / fundamental / blue sky / risky research that may not produce tangible outcomes.

    From this perspective, movements to bring research and HE ‘closer’ to its publics should be a good thing, because the more they are engaged, the less justification (fewer metrics) potentially required from middle men. In addition, it may prove that the government is being extremely patronising about the public’s ability to appreciate the intangible value of HE and research.

    However, public engagement obviously brings its own workload and (ideally) support staff…

  8. @mrgunn Says:

    You may be interested to know that “gaming metrics” was always a hot topic at the NISO altmetrics meetings. We eventually stopped worrying about whether something could be gamed (it can) & started thinking about how to make it expensive to do so. One of the best strategies we came up with was the idea of using correlations between metrics as a way to identify outliers in any one metric. For example, if you know that there’s usually a ratio of (for example) 100 pageviews : 10 PDF downloads : 3 Mendeley readers, then if any one metric for a given paper is way out of line, that is a red flag to investigate further. In line with this, SciVal recommends using a set of metrics as opposed to just one. This also prevents the “one number to rule them all” phenomenon, because any metrics consumer can adapt their set to meet their own needs. Spend a little time here http://www.niso.org/topics/tl/altmetrics_initiative/ & you’ll see that lots of people spent lots of time really thinking this through. How would you exploit a system where you didn’t just have to game one metric, but multiple correlated ones in just the right ratio, some of which you don’t have any knowledge of?
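    A toy sketch of this correlated-metrics check, assuming a known expected ratio between the metrics. The metric names, numbers, and tolerance below are invented for illustration and are not taken from the NISO work:

```python
# Flag any metric that is far out of line with a paper's other metrics,
# relative to an assumed baseline ratio between them. All names and
# numbers here are invented for the example.

EXPECTED_RATIO = {"pageviews": 100, "downloads": 10, "readers": 3}

def gaming_flags(paper_metrics, tolerance=3.0):
    """Return the metrics that deviate from the expected ratio,
    relative to this paper's other metrics, by more than `tolerance`x."""
    # Normalise each metric by its expected share; in an un-gamed paper
    # the normalised values should all be roughly equal.
    scaled = {k: paper_metrics[k] / EXPECTED_RATIO[k] for k in EXPECTED_RATIO}
    baseline = sorted(scaled.values())[len(scaled) // 2]  # median as baseline
    return [k for k, v in scaled.items()
            if v / baseline > tolerance or v / baseline < 1 / tolerance]

# A paper with metrics in the usual proportions raises no flag ...
print(gaming_flags({"pageviews": 2000, "downloads": 210, "readers": 55}))  # []
# ... but a burst of downloads with no matching views or readers does.
print(gaming_flags({"pageviews": 500, "downloads": 900, "readers": 15}))  # ['downloads']
```

    The point of the scheme is visible even in this sketch: inflating one metric in isolation is immediately conspicuous, so a gamer would have to inflate all the correlated metrics together in roughly the right proportions, which is much more expensive.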

  9. Noam Says:

    Agreed. But a question to ask is: why now? Why weren’t universities as managerialist 40 years ago? This discussion, I believe, will concentrate on social forces and ideas. This management / academic staff split is I believe related to the growth in inequality, the reduced appeal of socialist ideas as the cold war came to an end, …

  10. Mike Taylor Says:

    Noam, that is a key question. I take it to be a consequence of the establishment of economic neoliberalism as an article of faith in many Western governments. That is, the faith-based belief that unregulated markets always yield the best results.

  11. paleoaerie Says:

    I personally prefer the idea of eliminating metrics as a comparative tool. I have heard several people advocate a particular system which I think may work well. Instead of having scientists compete for grants from the major granting institutions, simply split the money equally among all those requesting it. Require that each person give a specified portion to another scientist (there are variations in the requirements for who can be given that money). This encourages collaboration between scientists, who could pool their money, and it puts the scientists themselves in charge of determining who they think is doing a good job. The details of how this system would be set up would need to be hammered out, but something along these lines would, I think, serve better than the current system. Of course, the perceived socialist aspects of this system would make it a hard sell in the US.

  12. paleoaerie Says:

    And yes, I realize the proposed system would seriously screw with the penchant of most universities to skim money from all the grants and the requirement that researchers get a portion of their salaries from grants. That is a complication that would need to be worked out.

  13. Matt Wedel Says:

    I realize the proposed system would seriously screw with the penchant of most universities to skim money from all the grants and the requirement that researchers get a portion of their salaries from grants. That is a complication that would need to be worked out.

    Or eliminated. The idea that researchers compete for grants not only to pursue their own research, but also to prop up their universities financially, is a fairly recent development that is probably unsustainable over the long run, and also possibly at the root of the problems we’re discussing here. In effect, NSF and NIH have become a pipeline for federal dollars to universities directly, with researchers – and indeed research – basically being Sherpas.

    Thought experiment: how many of the perverse incentives in science would disappear if grants could not include any institutional overhead? If researchers only had to apply for the money that they themselves needed, their grants would be smaller, and if the universities weren’t getting a cut of the money, they’d have no reason to incentivize researchers to get big sexy grants. They might start focusing on measures of actual quality, like documentation and reproducibility.

    Of course, institutional overhead exists because big science centers have big operating budgets. That’s a fact that can’t be dodged, and that money will have to come from somewhere. The question is whether it’s better to continue meeting those operating budgets through the mechanism of tacking on vast amounts of institutional overhead to research grants, or – just maybe – to sever that link and let research grants just be for research.

  14. David Marjanović Says:

    On that note, let me point out that I’ve only recently learned that many university professors in the US are paid their salaries purely for teaching, not for research, so only for 10 months per year; if they want to do research or can’t stretch their salaries to pay for 2 months of costs of living, they’re forced to apply for grants.

    Don’t people there notice how perverse this is? The very point of a university is that the teaching is done by people who actually work in the fields they’re teaching, who actually contribute to the research they teach about. Research is an inseparable part of a professor’s job.

  15. Amnon Harel Says:

    Murphy’s golden rule: Whoever has the gold makes the rules.

    The bureaucrats have the gold and get to decide which academics to give it to. Most universities are financed by governments, and Sue already wrote about some key motivations of their bureaucrats. The ultra-rich universities (Harvard and such) are an interesting exception, but may be too abnormal to teach us much about the rest of academia.

  16. william McInnes Says:

    If I’ve said it once, I’ll say it again. Money and Power are the keys. We never discuss how to acquire either because we are too involved in just academia … linear thinking, tucked away in our self-imposed little boxes, continually acting out the role of the victim. As such, nothing will change, and hence will bring about yet another wave of indignation and victimization. It’s called “Entitlement”. Don’t you get it yet???? We need to control the 2 keys. It’s actually extending academia outside the box wherein we plot out our own destiny … not to be chained to the destiny of non-academics [which is where we are now].

  17. Torben Mogensen Says:

    I agree with paleoaerie that even distribution is best, and with David that teachers should do research. The converse is also true: publicly funded researchers should also teach. So the ideal solution is to pay teachers a full, decent salary, but give them sufficient time for pursuits other than teaching and administration — 50% seems reasonable.

    Allow them each a small amount of money (say, 10%-20% of their salary) for travel expenses and equipment. If a researcher needs very expensive equipment (such as a particle accelerator), she would need to pool her resources with sufficiently many other researchers to make this happen. Crowd-funding sites could be set up to help such projects. In addition to researchers using their limited funds, private companies and individuals could chip in using these sites.

    The researchers should be controlled only to the extent that the research money they spend goes to scientific pursuits. They should _not_ be able to buy themselves free from teaching obligations. If they don’t want to teach, they should work in a privately funded lab instead of at a university. If a researcher cannot find a use for all of her equipment/travel money, she can always use it to support equipment for other researchers via the crowd-funding sites.

  18. Mike Taylor Says:

    I like the principle of your approach, Torben, but I don’t think it takes into account enough the differences in research costs across fields. Some physicists need particle accelerators, as you mention. Many bench scientists need equipment and reagents. As a palaeontologist, all I need is travel funds to visit collections and (potentially) dig sites. Some humanities scholars don’t even need that.

  19. Torben Mogensen Says:

    I agree that some purely theoretical fields (including math, philosophy, and linguistics) don’t need a lot of equipment, but finding a measure to determine who needs more just brings the problem back again. I believe the crowd-funding idea will help here: if a researcher cannot find good things to use her research money for, she can support those who can, but do not have sufficient funds to do it on their own.

    You say that, as a palaeontologist you don’t need a lot. That may be true if you only need funds to visit collections, but if you need to do excavations, the cost quickly spirals. Many laboratory scientists can do with much less: Their laboratories are typically shared between many researchers (and they can even to a large extent use teaching laboratories for research), and really expensive equipment such as accelerators and telescopes are used internationally by so many researchers that their combined funding should suffice.

  20. Mike Taylor Says:

    Running the LHC costs a billion dollars a year. Good luck crowdfunding that.

    There’s also a pervasive problem with crowdfunding: it inherently attracts money to sexy sciences, which are not the same thing as important sciences. I could probably get together a $1000 travel-fund Kickstarter to examine some sauropod bones and write them up; but someone who needs to visit an equally important collection of Eocene clam fossils will not get the money.

  21. Torben Mogensen Says:

    I agree that the system is not without problems, but no system will ever be. The current system is certainly worse. And you should not be so pessimistic about the interest in Eocene clam fossils.

  22. Mike Taylor Says:

    Oh, I think I should. I, for example, find Eocene clams quite astoundingly dull, quite intolerably tedious.

  23. William Miller Says:

    >>Agreed. But a question to ask is: why now? Why weren’t universities as managerialist 40 years ago?

    Society in general was less bureaucratic (certainly in the US). IMO this is far broader than universities.

    I think at least the greater focus on metrics is because of a greater drive for accountability / ability to make decisions in a demonstrably unbiased way. A more litigious society drives more emphasis on objective metrics & paper trail to protect an organization from being sued.

    Universities interact heavily w/ government, so a more bureaucratic government is probably involved too.

  24. David Marjanović Says:

    I agree on Eocene clams. Research funding should not depend on what’s fashionable.

    Not all branches of linguistics are purely theoretical! There’s even one called field linguistics: you go somewhere, learn the language and write a dictionary, a grammar ( = how the language works) and a corpus of texts (usually traditional stories).

    I agree with paleoaerie that even distribution is best, and with David that teachers should do research. The converse is also true: publicly funded researchers should also teach.

    I don’t agree with that. Simply, not every good researcher is a good teacher. There should be publicly funded research-only institutions like the CNRS in France.

    Conversely, not every good teacher is a good researcher. Universities are specifically for people who are good at being both.


  25. […] And there’s an interesting follow-up article that tries to get to the root of the problem: Why do we manage academia so […]

  26. Michael Says:

    Everyone reading this has probably seen the higher ed inflation charts that start in the 1970s and show that the cost of higher ed has increased faster than the rate of inflation for several decades now.

    In the 1980s alarm grew, and politicians (and many of the general public) noted that university administrators were almost all former faculty. They decided that the problem of excessive inflation was due to “fuzzy-headed” academic types who were incapable of good management practices. The “obvious” solution was to bring in the “tough-minded” and “well-trained” MBA types to manage the universities using sound business practices.

    The switch to “objective” metrics by managerial types was an “obvious” solution–if you can count widgets you can just as easily count publications or student numbers.

    Society at large found the logic compelling and so professional management was brought in to streamline and rationalize university budgets. Unfortunately the only real change has been a massive increase in administrative staffing and structure.

    Around the same time you see a growing attention to college rankings. Of course the rankings were (and continue to be) based on easily available metrics that have little (often nothing) to do with the quality of education that a student receives.

    The inevitable result is what we are discussing.

