Check your calculations. Submit your data. Replicate.
April 20, 2013
It’s well worth reading this story about Thomas Herndon, a graduate student who as part of his training set out to replicate a well-known study in his field.
The work he chose, Growth in a Time of Debt by Reinhart and Rogoff, claims to show that “median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower.” It has been influential in guiding the economic policy of several countries, reaffirming an austerity-based approach.
So here is Lesson zero, for policy makers: correlation is not causation.
To skip ahead to the punchline, it turned out that Reinhart and Rogoff made a trivial but important mechanical mistake in their working: they meant to average values from 19 rows of their spreadsheet, but got the formula wrong and missed out the last five. Those five included three countries which had experienced high growth while deep in debt, and which if included would have undermined the conclusions.
Therefore, Lesson one, for researchers: check your calculations. (Note to myself and Matt: when we revise the recently submitted Taylor and Wedel paper, we should be careful to check the SUM() and AVG() ranges in our own spreadsheet!)
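The kind of mistake in question is easy to reproduce. Here is a minimal sketch in Python of how a range that stops five rows short skews an average; the growth figures are made up for illustration and are not the actual Reinhart and Rogoff data:

```python
# Hypothetical growth rates (percent) for 19 high-debt country-years.
# These numbers are invented for illustration only; they are NOT the
# Reinhart and Rogoff data.
growth = [-0.5, 0.2, 1.1, -1.3, 0.4, 0.9, -0.2, 0.3, 1.0, -0.8,
          0.5, 0.1, -0.4, 0.6,       # the 14 rows the formula covered
          2.6, 3.9, 2.4, 1.5, 2.9]   # the last 5 rows, accidentally excluded

mean_all = sum(growth) / len(growth)      # the intended calculation
mean_truncated = sum(growth[:14]) / 14    # what the bad range computes

print(f"mean over all 19 rows:   {mean_all:.2f}%")
print(f"mean over first 14 rows: {mean_truncated:.2f}%")
```

With these invented numbers, dropping five high-growth rows pulls the mean down from 0.80% to 0.14% — exactly the shape of error a reader can only catch by re-running the calculation from the raw data.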
Herndon was able to discover this mistake only because he repeatedly hassled the authors of the original study for the underlying data. He was ignored several times, but eventually one of the authors did send the spreadsheet. Which is just as well. But of course he should never have had to go chasing the authors for the spreadsheet because it should have been published alongside the paper.
Lesson two, for researchers: submit your data alongside the paper that uses it. (Note to myself and Matt: when we submit the revisions of that paper, submit the spreadsheets as supplementary files.)
Meanwhile, governments around the world were allowing policy to be influenced by the original paper without checking it — policies that affect the disposition of billions of pounds. Yet the paper only got its post-publication review because of a post-grad student’s exercise. That’s insane. It should be standard practice to have someone spend a day or two analysing a paper in detail before letting it have such a profound effect.
And so Lesson three, for policy makers: replicate studies before trusting them.
Ironically, this may be a case where the peer-review system inadvertently did actual harm. It seems that policy makers may have shared the widespread superstition that peer-reviewed publications are “authoritative”, or “quality stamped”, or “trustworthy”. That would certainly explain their allowing it to affect multi-billion-pound policies without further validation. [UPDATE: the paper wasn’t peer-reviewed after all! See the comment below.]
Of course, anyone who’s actually been through peer-review a few times knows how hit-and-miss the process is. Only someone who’s never experienced it directly could retain blind faith in it. (In this respect, it’s a lot like cladistics.)
If a paper has successfully made it through peer-review, we should afford it a bit more respect than one that hasn’t. But that should never translate to blind trust.
In fact, let’s promote that to Lesson four: don’t blindly trust studies just because they’re peer-reviewed.