It’s now widely understood among researchers that the impact factor (IF) is a statistically illiterate measure of the quality of a paper. Unfortunately, it’s not yet universally understood among administrators, who in many places continue to judge authors on the impact factors of the journals they publish in. They presumably do this on the assumption that impact factor is a proxy for, or predictor of, citation count, which in turn is assumed to correlate with influence.

As shown by Lozano et al. (2012), the correlation between IF and citations is in fact very weak — r² is about 0.2 — and has been progressively weakening since the dawn of the Internet era and the consequent decoupling of papers from the physical journal that they appear in. This is a counter-intuitive finding: given that the impact factor is calculated from citation counts, you’d expect it to correlate much more strongly. But the enormous skew of citation rates towards a few big winners renders the average used by the IF meaningless.

To bring this home, I plotted my own personal impact-factor/citation-count graph. I took the citation counts from Google Scholar, which recognises 17 of my papers; then I looked up the impact factors of the venues they appeared in, plotted citation count against impact factor, and calculated a best-fit line through my data points. Here’s the result (taken from a slide in my Berlin 11 satellite conference talk):

[Figure: my personal impact-factor/citation-count graph, with best-fit line, from my Berlin 11 satellite conference talk]

I was delighted to see that the regression slope is actually negative: in my case at least, the higher the impact factor of the venue I publish in, the fewer citations I get.
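If you want to draw the same kind of graph for your own papers, here is a minimal Python sketch using numpy and matplotlib; the impact factors and citation counts below are made-up stand-ins, not my real numbers:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: one (venue impact factor, citation count) pair per paper.
impact_factors = np.array([0.0, 0.0, 0.0, 1.58, 1.58, 2.21, 2.21, 38.6])
citations = np.array([4.0, 9.0, 12.0, 55.0, 23.0, 31.0, 18.0, 1.0])

# Ordinary least-squares best-fit line; np.polyfit returns (slope, intercept) for degree 1.
slope, intercept = np.polyfit(impact_factors, citations, 1)

plt.scatter(impact_factors, citations)
xs = np.linspace(0, impact_factors.max(), 100)
plt.plot(xs, slope * xs + intercept)
plt.xlabel("Impact factor of venue")
plt.ylabel("Citations (Google Scholar)")
plt.title(f"Best-fit slope: {slope:.2f}")
plt.show()
```

With a single barely cited paper sitting in a very high-impact-factor venue, the fitted slope comes out negative, which is essentially what happens on my graph.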

There are a few things worth unpacking on that graph.

First, note the proud cluster on the left margin: publications in venues with impact factor zero (i.e. no impact factor at all). These include papers in new journals like PeerJ, in perfectly respectable established journals like PaleoBios, edited-volume chapters, papers in conference proceedings, and an arXiv preprint.

My most-cited paper, by some distance, is Head and neck posture in sauropod dinosaurs inferred from extant animals (Taylor et al. 2009, a collaboration between all three SV-POW!sketeers). That appeared in Acta Palaeontologica Polonica, a journal that is very well respected in the palaeontology community but has a modest impact factor of 1.58.

My next most-cited paper, the Brachiosaurus revision (Taylor 2009), is in the Journal of Vertebrate Paleontology – unquestionably the flagship journal of our discipline, despite its similarly unspectacular impact factor of 2.21. (For what it’s worth, I seem to recall it was about half that when my paper came out.)

In fact, none of my publications have appeared in venues with an impact factor greater than 2.21, with one trifling exception. That is what Andy Farke, Matt and I ironically refer to as our Nature monograph (Farke et al. 2009). It’s a 250-word letter to the editor on the subject of the Open Dinosaur Project. (It’s a subject that we now find profoundly embarrassing given how dreadfully slowly the project has progressed.)

Google Scholar says that our Nature note has been cited just once. But the truth is even better: that one citation is in fact from an in-prep manuscript that Google has dug up prematurely — one that we ourselves put on Google Docs, as part of the slooow progress of the Open Dinosaur Project. Remove that, and our Nature note has been cited exactly zero times. I am very proud of that record, and will try to preserve it by persuading Andy and Matt to remove the citation from the in-prep paper before we submit. (And please, folks: don’t spoil my record by citing it in your own work!)

What does all this mean? Admittedly, not much. It’s anecdote rather than data, and I’m posting it more because it amuses me than because it’s particularly persuasive. In fact if you remove the anomalous data point that is our Nature monograph, the slope becomes positive — although it’s basically meaningless, given that all my publications cluster in the 0–2.21 range. But then that’s the point: pretty much any data based on impact factors is meaningless.


Yesterday I was at the Berlin 11 satellite conference for students and early-career researchers. It was a privilege to be part of a stellar line-up of speakers, including the likes of SPARC’s Heather Joseph, PLOS’s Cameron Neylon, and eLife’s Mark Patterson. But even more than these, there were two people who impressed me so much that I had to give in to my fannish tendencies and have photos taken with them. Here they are.

[Photo: Mike Taylor with Jack Andraka]

This is Jack Andraka, who at the age of fifteen invented a new test for pancreatic cancer that is 168 times faster, 1/26000 as expensive and 400 times more sensitive than the current diagnostic tests, and only takes five minutes to run.  Of course he’s grown up a bit since then — he’s sixteen now.

Right at the moment Jack’s not getting much science done because he’s sprinting from meeting to meeting. He came to us in Berlin literally straight from an audience with the Pope. He’s met Barack Obama in the Oval Office. And one of the main burdens of his talk is that he’s not such an outlier as he appears: there are lots of other brilliant kids out there who are capable of doing similarly groundbreaking work — if only they could get access to the published papers they need. (Jack was lucky: his parents are indulgent, and spent thousands of dollars on paywalled papers for him.)

Someone on Twitter noted that every single photo of Jack seems to show him, and the people he’s with, in thumbs-up pose. It’s true: and that is his infectious positivity at work. It’s energising as well as inspiring to be around him.

(Read Jack’s guest post at PLOS on Why Science Journal Paywalls Have to Go)

Here’s the other photo:

[Photo: Mike Taylor with Bernard Rentier]

This is Bernard Rentier, who is rector of the University of Liège. To put it bluntly, he is the boss of the whole darned university — an academic of the very senior variety that I never meet; and of the vintage that, to put it kindly, can have a tendency to be rather conservative in approach, and cautious about open access.

With Bernard, not a bit of it. He has instituted a superb open-access policy at Liège — one that is now being taken up as a model for the whole of Belgium. Whenever members of the Liège faculty apply for anything — office space, promotions, grants, tenure — their case is evaluated by taking into account only publications that have been deposited in the university’s open-access repository, ORBi.

Needless to say, the compliance rate is superb — essentially 100% since the policy came in. As a result, Liège’s work is more widely used, cited, reused, replicated, rebutted and generally put to work. The world benefits, and the university benefits.

Bernard is a spectacular example of someone in a position of great power using that power for good. Meanwhile, at the other end of the scale, Jack is someone who — one would have thought — had no power at all. But in part because of work made available through the influence of people like Bernard, it turned out he had the power to make a medical breakthrough.

I came away from the satellite meeting very excited — in fact, by nearly all the presentations and discussions, but most especially by the range represented by Jack and Bernard. People at both ends of their careers; both of them not only promoting open access, but also doing wonderful things with it.

There’s no case against open access, and there never has been. But shifting the inertia of long-established traditions and protocols requires enormous activation energy. With advocates like Jack and Bernard, we’re generating that energy.

Onward and upward!

It shouldn’t come as a huge surprise to regular readers that PeerJ is Matt’s and my favourite journal. Reasons include its super-fast turnaround, beautiful formatting that doesn’t look like a facsimile of 1980s printed journals, and its responsiveness to authors and readers. But the top reason is undoubtedly its openness: not only are the articles open access, but the peer-review process is also (optionally) open, and of course PeerJ preprints are inherently open science.

Now, during Open Access Week, PeerJ has published this paper (Farke et al. 2013), describing the most open-access dinosaur in the world.

[Figure: Farke et al. 2013, figure 4]

It’s a baby Parasaurolophus, but despite being a stinkin’ ornithopod it’s a fascinating specimen for a lot of reasons. For one thing, it’s the most complete known Parasaurolophus. For another, its young age enables new insights into hadrosaur ontogeny. It’s really nicely preserved, with soft-tissue preservation of both the skin and the beak. The most important aspect of the preservation may be that CT scanning shows the cranial airways clearly:

[Figure: Farke et al. 2013, figure 9]

This makes it possible for the new specimen to show us the ontogenetic trajectory of Parasaurolophus – specifically to see how its distinctive tubular crest grew.

[Figure: Farke et al. 2013, figure 11]

But none of this goodness is the reason that we at SV-POW! Towers are excited about this paper. The special sauce is the ground-breaking degree of openness in how the specimen is presented. Not only is the paper itself open access (with all 28 of its beautiful illustrations correspondingly open, and available in high-resolution versions); best of all, the CT scan data, surface models and segmentation data are freely available on FigShare. That’s all the 3D data that the team produced: everything they used in writing the paper is free for us all. We can use it to verify or falsify their conclusions; we can use it to make new mechanical models; we can use it to make replicas of the bones on 3D printers. In short: we can do science on this specimen, to a degree that’s never been possible with any previously published dinosaur.
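As a taste of what “doing science on this specimen” might look like, here is a minimal sketch of inspecting one of the surface models, assuming you have downloaded an STL file from the FigShare archive (the filename here is hypothetical) and have the open-source trimesh library installed:

```python
import trimesh

# Hypothetical filename: substitute whichever surface model you downloaded from FigShare.
mesh = trimesh.load("parasaurolophus_skull_surface.stl")

# Basic sanity checks and measurements on the 3D surface model.
print("Watertight:", mesh.is_watertight)  # a closed surface is needed for a volume estimate
print("Surface area:", mesh.area)
if mesh.is_watertight:
    print("Enclosed volume:", mesh.volume)

# Write out a copy, e.g. as a starting point for 3D printing.
mesh.export("parasaurolophus_skull_copy.stl")
```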

This is great, and it shows a generosity of spirit from Andy Farke and his co-authors.

But more than that: I think it’s a great career move. Not so long ago, I might have answered the question “should we release our data?” with a snarky answer: “it depends on why you have a science career: to advance science, or to advance your career”. I don’t see it that way any more. By giving away their data, Farke’s team are certainly not precluding using it themselves as the basis for more papers — and if others use it in their work, then Farke et al. will get cited more. Everyone wins.

Open it up, folks. Do work worthy of giants, and then let others stand freely on your shoulders. They won’t weigh you down; if anything, they’ll lift you up.

References

Farke, Andrew A., Derek J. Chok, Annisa Herrero, Brandon Scolieri, and Sarah Werning. 2013. Ontogeny in the tube-crested dinosaur Parasaurolophus (Hadrosauridae) and heterochrony in hadrosaurids. PeerJ 1:e182. http://dx.doi.org/10.7717/peerj.182

Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior.

What would you do?

You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out.

But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?”.

Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want.

Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.)

Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge.

But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers.

Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results.

To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper.

Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.”

Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”.

Now you have the results you want — but how will you ever get through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion.

Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing.

Everyone reading this will, I am sure, have recognised that I’m talking about  John Bohannon’s “sting operation” in Science. Bohannon has a Ph.D. in molecular biology from Oxford University, so we would hope he’d know what actual science looks like, and that this study is not it.

Of course, the problem is that he does know what science looks like, and he’s made the “sting” operation look like it. It has that sciencey quality. It discusses methods. It has supplementary information. It talks a lot about peer-review, that staple of science. But none of that makes it science. It’s a maze of preordained outcomes, multiple levels of biased selection, cherry-picked data and spin-ridden conclusions. What it shows is: predatory journals are predatory. That’s not news.

Speculating about motives is always error-prone, of course, but it’s hard not to think that Science’s goal in all this was to discredit open-access publishing — just as legacy publishers have been doing ever since they realised OA was real competition. If that was their goal, it’s misfired badly. It’s Science’s credibility that’s been compromised.

Update (9 October)

Akbar Khan points out yet more problems with Bohannon’s work: mistakes in attributing where given journals were listed, DOAJ or Beall’s list. As a result, the sample may be more, or less, biased than Bohannon reported.

An extraordinary study has come to light today, showing just how shoddy peer-review standards are at some journals.

Evidently fascinated by Science’s eagerness to publish the fatally flawed Arsenic Life paper, John Bohannon conceived the idea of constructing a study so incredibly flawed that it didn’t even include a control. His plan was to see whether he could get it past the notoriously lax Science peer-review provided it appealed strongly enough to that journal’s desire for “impact” (defined as the ability to generate headlines) and pandered to its preconceptions (that its own publication model is the best one).

So Bohannon carried out the most flawed study he could imagine: submitting fake papers to open-access journals selected in part from Jeffrey Beall’s list of predatory publishers without sending any of his fake papers to subscription journals, noting that many of the journals accepted the papers, and drawing the flagrantly unsupported conclusion that open-access publishing is flawed.

Incredibly, Science not only published this study, but made it the lead story of today’s issue.

It’s hard to know where Science can go from here. Having fallen for Bohannon’s sting, its credibility is shot to pieces. We can only assume that the AAAS will now be added to Beall’s list of predatory publishers.


Yesterday I announced that our new paper on Barosaurus was up as a PeerJ preprint and invited feedback.

I woke up this morning to find its third substantial review waiting for me.

That means that this paper has now accumulated as much useful feedback in the twenty-seven hours since I submitted it as any previous submission I’ve ever made.


Taylor and Wedel (2013b: figure 7). Barosaurus lentus holotype YPM 429, Vertebra S (C?12). Left column from top to bottom: dorsal, right lateral and ventral views; right column: anterior view. Inset shows displaced fragment of broken prezygapophysis. Note the narrow span across the parapophyses in ventral view, and the lack of damage to the ventral surface of the centrum which would indicate transverse crushing.

It’s worth reviewing the timeline here:

  • Monday 23rd September, 1:19 am: I completed the submission process.
  • 7:03 am: the preprint was published. It took less than six hours.
  • 10:52 am: received a careful, detailed review from Emanuel Tschopp. It took less than four hours from publication, and so of course less than ten from submission.
  • About 5:00 pm: received a second review, this one from Mark Robinson. (I don’t know the exact time because PeerJ’s page doesn’t show an actual timestamp, just “21 hours ago”.)
  • Tuesday 24th September, about 4:00 am: received a third review, this from ceratopsian-jockey and open-science guru Andy Farke.

Total time from submission to receiving three substantial reviews: about 27 hours.

It’s worth contrasting that with the times taken to get from submission to the receipt of reviews — usually only two of them — when going through the traditional journal route. Here are a few of mine:

  • Diplodocoid phylogenetic nomenclature at the Journal of Paleontology, 2004-5 (the first reviews I ever received): three months and 14 days.
  • Revised version of the same paper at PaleoBios, 2005 (my first published paper): one month and 10 days.
  • Xenoposeidon description at Palaeontology, 2006: three months and 19 days, although that included a delay as the handling editor sent it to a third, tie-breaking, reviewer.
  • Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2008: one month and 11 days.
  • Sauropod neck anatomy (eventually to be published in a very different form in PeerJ) at Paleobiology: five months and two days.
  • Trivial correction to the Brachiosaurus revision at the Journal of Vertebrate Paleontology, 2010: five months and 11 days, bizarrely for a half-page paper.

Despite the wide variations in submission-to-review time at these journals, it’s clear that you can expect to wait at least a month before getting any feedback at all on your submission at traditional journals. Even PeerJ took 19 days to get the reviews of our neck-anatomy paper back to us.

So I am now pretty much sold on the pre-printing route. As well as getting this early version of the paper out there so that other palaeontologists can benefit from it (and so that we can’t be pre-emptively plagiarised), issuing a preprint has meant that we’ve got really useful feedback very quickly.

I highly recommend this route.

By the way, in case anyone’s wondering, PeerJ Preprints is not only for manuscripts that are destined for PeerJ proper. They’re perfectly happy for you to use their service as a place to gather feedback for your work before submitting it elsewhere. So even if your work is destined for, say, JVP, there’s a lot to be gained by preprinting it first.

I just read Mick Watson’s post Why I resigned as PLOS ONE academic editor on his blog opiniomics. Turns out his frustration with PLOS ONE is not to do with his editorial work but with the long silences he faced as an author at that journal when trying to get a bad decision appealed.

I can totally identify with that, though my most frustrating experiences along these lines have been with other journals. (Yes, Paleobiology, I’m looking at you.) So here’s what I wrote in response (lightly edited from the version that appeared as a comment on the original blog).

There’s one thing that PLOS ONE could and should do to mitigate this kind of frustration: communicate. And so should all other journals.

At every step in the appeal process — and indeed the initial review process — an automated email should be sent to the author. So for the initial submission:

  1. “Your paper has been assigned an academic editor.”
  2. “Your paper has been sent out to a reviewer.”
  3. “An invited reviewer has declined to review; we will try another.”
  4. “An invited reviewer failed to accept or decline within two weeks; we will try another.”
  5. “A review has been submitted.”
  6. “A reviewer has failed to submit his report within four weeks; we are making contact again to ask for a quick response.”
  7. “A reviewer has failed to submit his report within six weeks; we have dropped that reviewer from this process and will try another.”
  8. “All reviews are in; the editor is considering the decision.”
  9. Decision letter.

And for the appeal:

  1. “Your appeal has been noted and is under consideration.”
  2. “We have contacted the original handling editor.”
  3. “The original handling editor has responded.”
  4. “The original handling editor has failed to respond after four weeks; we are escalating to a senior editor.”
  5. [perhaps] go back into some or all of the submission process.
  6. Decision letter.

Most if not all of these stages in the process already have workflow logic in the manuscript-handling system. There is no reason not to send the poor author emails when they happen — it’s no extra work for the editor or reviewers.
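To show how little work this would be, here is a minimal sketch (in Python, with invented event names and a stub standing in for the journal’s real mail system) of the kind of hook that could fire these notifications from the workflow logic that already exists:

```python
# Invented event names and templates, for illustration only.
NOTIFICATION_TEMPLATES = {
    "editor_assigned":   "Your paper has been assigned an academic editor.",
    "reviewer_invited":  "Your paper has been sent out to a reviewer.",
    "reviewer_declined": "An invited reviewer has declined to review; we will try another.",
    "review_received":   "A review has been submitted.",
    "all_reviews_in":    "All reviews are in; the editor is considering the decision.",
}

def send_email(address: str, body: str) -> None:
    # Stand-in for the manuscript-handling system's real mail facility.
    print(f"To: {address}\n{body}\n")

def on_status_change(event: str, author_email: str) -> None:
    """Send the author an automated note whenever the workflow state changes."""
    template = NOTIFICATION_TEMPLATES.get(event)
    if template:
        send_email(author_email, template)

# Example: the workflow engine would call this as each step happens.
on_status_change("editor_assigned", "author@example.org")
```

The same table-driven approach extends trivially to the appeal-stage messages listed above.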

Speaking as the veteran of plenty of long-drawn-out silences from journals that I’ve submitted to, I know that getting these messages would have made a big difference to me.

Last October, we published a sequence of posts about misleading review/reject/resubmit practices by Royal Society journals (Dear Royal Society, please stop lying to us about publication times; We will no longer provide peer reviews for Royal Society journals until they adopt honest editorial policies; Biology Letters does trumpet its submission-to-acceptance time; Lying about submission times at other journals?; Discussing Biology Letters with the Royal Society). As noted in the last of these posts, the outcome was that I had what seemed to be a fruitful conversation with Stuart Taylor, Commercial Director of the society.

Then things went quiet for some time.

On 8 May this year, I emailed Stuart to ask what progress there had been. At his request Phil Hurst (Publisher, The Royal Society) emailed me back on 10 May as follows:

Dear Mike

Stuart has asked me to update you on the changes we have made following your conversation last year.

We have reviewed editorial procedures on Biology Letters. Further to this, we now provide Editors with the additional decision option of ‘revise’. This provides a middle way between ‘reject and resubmit’ and ‘accept with minor revisions’. Editors use all three options and it is entirely at their discretion which they select. ‘Revised’ papers retain the original submission date and we account for this in our published acceptance times.

In addition, we now publicise ‘first decision’ times rather than ‘first acceptance’ times on our website. We feel this is more meaningful as it gives prospective authors an indication of the time, irrespective of decision.

The first thing to say is, it’s great to see some progress on this.

The second thing is, I must apologise for my terrible slowness in reporting back. Phil emailed me again on 17 June to remind me to post, and it’s still taken me more than another month.

The third thing is, while this is definitely progress, it doesn’t (yet) fix the problem. That’s for two reasons.

The first problem is that so long as there is a “reject and resubmit” option that does not involve a brand new round of review (like a true resubmission), there is still a loophole by which editors can massage the journals’ figures. Of course, there is nothing wrong with “reject and resubmit” per se, but it does have to result in the resubmission being treated as a brand new submission — it can’t be a fig-leaf for what are actually minor revisions, as in the paper that first made me aware of this practice.

So I would urge the Royal Society either to get rid of the R&R option completely, replacing it with a simple “reject”; or to establish firm, explicit, transparent rules about how such resubmissions are treated.

The second problem is with the reporting. It’s true that the home pages of both Proc. B and Biology Letters do now publicise “Average receipt to first decision time” rather than the misleading old “Average receipt to acceptance time”. This is good news. Proc. B (though for some reason not Biology Letters) even includes a link to an excellent and very explicit page that gives three times (receipt to first decision, receipt to online publication and final decision to online publication) for five journals, and explains exactly what they mean.

Unfortunately, individual articles still include only Received and Accepted dates. You can see examples in recent papers both at Proc. B and at Biology Letters. As far as I can tell, there is no way to determine whether the Received date is for the original submission, or (as I can’t help but suspect) the minor revision that is disguised as a resubmission.

The solution for this is very simple (and was raised when I first talked to Stuart Taylor back in October): just give three dates: Received, Revised and Accepted. Then everything is clear and above board, and there is no scope for anyone to suspect wrongdoing.
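To make the point concrete, here is a tiny Python sketch (with invented dates) of how publishing all three dates keeps the whole history visible, so the clock cannot quietly be reset at resubmission:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ArticleHistory:
    received: date   # original submission
    revised: date    # resubmission after "revise" or "reject and resubmit"
    accepted: date

    def days_before_revision(self) -> int:
        # Time from original receipt to the revised submission (includes the author's revision time).
        return (self.revised - self.received).days

    def days_to_acceptance(self) -> int:
        # The true submission-to-acceptance time, counted from the original receipt.
        return (self.accepted - self.received).days

# Hypothetical paper: even if a journal's headline figure counted only from the revision,
# the full history would still be plain from the three dates.
paper = ArticleHistory(date(2013, 1, 10), date(2013, 4, 2), date(2013, 4, 20))
print(paper.days_before_revision())  # 82
print(paper.days_to_acceptance())    # 100
```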

Here at SV-POW!, we are an equal-opportunity criticiser of publishers: Springer, PLOS, Elsevier, the Royal Society, Nature, we don’t care. We call problems as we see them, where we see them. Here is one that has lingered for far too long. PLOS ONE’s journal information page says:

Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound.

Which is as we would expect it to be. But their reviewer guidelines page gives more detail as follows (emphasis added):

[Academic Editors] can employ a variety of methods, alone or in combination, to reach a decision in which they are confident:

  • They can conduct the peer review themselves, based on their own knowledge and experience
  • They can take further advice through discussion with other members of the editorial board
  • They can solicit reports from further referees

As has been noted in comments on this blog, this first form, in which the editor makes the decision alone, is “unlike any other first-tier academic journal”. When I submitted my own manuscript to PLOS ONE a few weeks ago, I did it in the expectation that it would be reviewed in the usual way, by two experts chosen by the editor, who would then use those reviews in conjunction with her own expertise to make a decision. I’d hate to think it would go down the easier track, and so not be accorded the recognition that a properly peer-reviewed article gets. (Merely discussing with other editors would also not constitute proper peer-review in many people’s eyes, so only the third track is really the whole deal.)

The problem here is not a widespread one. Back when we first discussed this in any detail, about 13% of PLOS ONE papers slipped through on the editor-only inside lane. But more recent figures (based on the 1,837 manuscripts that received a decision between 1st July and 30th September 2010) say that only 4.2% of articles take this track. Evidently the process was by then in decline; it’s a shame we don’t have more recent numbers.

But the real issue here is lack of transparency. Four and half years ago, Matt said “I really wish they’d just state the review track for each article–i.e., solo editor approved, multiple editor approved, or externally reviewed [...] I also hope that authors are allowed to preferentially request ‘tougher’ review tracks”.

It seems that still isn’t done. Looking at this article, which at the time of writing is the most recent one published by PLOS ONE, there is a little “PEER REVIEWED” logo up at the top, but no detail of which track was taken. PLOS themselves evidently take the line that all three tracks constitute peer-review, as “Academic Editors are not employees [...] they are external peer reviewers”.

So I call on PLOS ONE to either:

A. eliminate the non-traditional peer-review tracks, or

B1. Allow submitting authors to specify they want the traditional track, and

B2. Specify explicitly on each published paper which track was taken.

“The benefit of published work is that if they have passed the muster of peer review future researchers can have faith in the results”, writes a commenter at The Economist. Such statements are commonplace.

I couldn’t disagree more. Nothing is more fatal to the scientific endeavour than having “faith” in a previously published result — as the string of failed replications in oncology and in social psychology is showing. See also the trivial but crucial spreadsheet error in the economics paper that underlies many austerity policies.

Studies have shown that peer-reviewers on average spend about 2-3 hours in evaluating a paper that’s been sent their way. There is simply no way for even an expert to judge in that time whether a paper is correct: the best they can do is say “this looks legitimate, the authors seem to have gone about things the right way”.

Now that is a useful thing to be able to say, for sure. Peer review is important as a stamp of serious intent. But it’s a long way from a mark of reliability, and enormous damage is done by the widespread assumption that it means more than it does.

Remember: “has passed peer review” only really means “two experts have looked at this for a couple of hours, and didn’t see anything obviously wrong in it”.

Note. I initially wrote this as a comment on a pretty good article about open access at The Economist. That article is not perfect, but it’s essentially correct, and it makes me happy that these issues are now mainstream enough that it’s no longer a surprise when they’re covered by as mainstream an outlet as The Economist.
