Many SV-POW! readers will already be aware that the entire editorial staff of the Elsevier journal Lingua has resigned over the journal’s high price and lack of open access. As soon as they have worked out their contracts, they will leave en bloc and start a new open access journal, Glossa — which will in fact be the old journal under a new name. (Whether Elsevier tries to keep the Lingua ghost-ship afloat under new editors remains to be seen.)

Today I saw Elsevier’s official response, “Addressing the resignation of the Lingua editorial board”. I just want to pick out one tiny part of this, which reads as follows:

The article publishing charge at Lingua for open access articles is 1800 USD. The editor had requested a price of 400 euros, an APC that is not sustainable. Had we made the journal open access only and at the suggested price point, it would have rendered the journal no longer viable – something that would serve nobody, least of which the linguistics community.

The new Lingua will be hosted at Ubiquity Press, a well-established open-access publisher that started out as UCL’s in-house OA publishing arm and has spun off into a private company. The APC at Ubiquity journals is typically £300 (€375, $500), which is less than the level that Elsevier describe as “not sustainable” (and a little over a quarter of what Elsevier currently charge).

Evidently Ubiquity Press finds it sustainable.

You know what’s not sustainable? Dragging around the carcass of a legacy barrier-based publisher, with all its expensive paywalls, authentication systems, Shibboleth/Athens/Kerberos integration, lawyers, PR departments, spin-doctors, lobbyists, bribes to politicians, and of course 37.3% profit margins.

The biggest problem with legacy publishers? They’re just a waste of money.

Somehow this seems to have slipped under the radar: National Science Foundation announces plan for comprehensive public access to research results. They put it up on 18 March, two whole months ago, so our apologies for not having said anything until now!

This is the NSF’s rather belated response to the OSTP memo on Open Access from February 2013. That memo required all Federal agencies that spend more than $100 million a year on research and development to develop OA policies, broadly in line with the NIH’s existing policy, which gave us PubMed Central. Various agencies have been turning up with policies, but for those of us in palaeo, the NSF’s the big one — I imagine it funds more palaeo research than all the others put together.

So far, so awesome. But what exactly is the new policy? The press release says papers must “be deposited in a public access compliant repository and be available for download, reading and analysis within one year of publication”, but says nothing about what repository should be used. It’s lamentable that a full year’s embargo has been allowed, but at least the publishers’ CHORUS land-grab hasn’t been allowed to hobble the whole thing.

There’s a bit more detail here, but again it’s oddly coy about where the open-access works will be placed: it just says they must be “deposited in a public access compliant repository designated by NSF”. The executive summary of the actual plan also refers only to “a designated repository”.

Only in the full 31-page plan itself does the detail emerge. From page 5:

In the initial implementation, NSF has identified the Department of Energy’s PAGES (Public Access Gateway for Energy and Science) system as its designated repository and will require NSF-funded authors to upload a copy of their journal articles or juried conference paper to the DOE PAGES repository in the PDF/A format, an open, non-proprietary standard (ISO 19005-1:2005). Either the final accepted version or the version of record may be submitted. NSF’s award terms already require authors to make available copies of publications to the Cognizant Program Officers as part of the current reporting requirements. As described more fully in Sections 7.8 and 8.2, NSF will extend the current reporting system to enable automated compliance.

Future expansions, described in Section 7.3.1, may provide additional repository services. The capabilities offered by the PAGES system may also be augmented by services offered by third parties.

So what is good and bad about this?

Good. It makes sense to me that they’re re-using an existing system rather than wasting resources and increasing fragmentation by building one of their own.

Bad. It’s a real shame that they mandate the use of PDF, “the hamburger that we want to turn back into a cow”. It’s a terrible format for automated analysis, greatly inferior to the JATS XML format used by PubMed Central. I don’t understand this decision at all.

Then on page 9:

In the initial implementation, NSF has identified the DOE PAGES system to support managing journal articles and juried conference papers. In the future, NSF may add additional partners and repository services in a federated system.

I’m not sure where this points. In an ideal world, it would mean some kind of unifying structure between PAGES and PubMed Central and whatever other repositories the various agencies decide to use.

Anyone else have thoughts?

Update from Peter Suber, later that day

Over on Google+, Peter Suber comments on this post. With his permission, I reproduce his observations here:

My short take on the policy’s weaknesses:

  • will use Dept of Energy PAGES, which at least for DOE is a dark archive pointing to live versions at publisher web sites
  • plans to use CHORUS (p. 13) in addition to DOE PAGES
  • requires PDF
  • silent on open licensing
  • only mentions reuse for data (pp. v, 18), not articles, and only says it will explore reuse
  • silent on reuse for articles even tho it has a license (p. 10) authorizing reuse
  • silent on the timing of deposits

I agree with you that a 12 month embargo is too long. But that’s the White House recommended default. So I blame the White House for this, not NSF.

To be more precise, PAGES favors publisher-controlled OA in one way, and CHORUS does it in another way. Both decisions show the effect of publisher lobbying on the NSF, and its preference for OA editions hosted by publishers, not OA editions hosted by sites independent of publishers.

So all in all, the NSF policy is much less impressive than I’d initially thought and hoped.

My new article is up at the Guardian. This time, I have taken off the Conciliatory Hat, and I’m saying it how I honestly believe it is: publishing your science behind a paywall is immoral. And the reasons we use to persuade ourselves it’s acceptable really don’t hold up.

Read Choose open access: publishing your science behind a paywall is immoral

Because for all that we rightly talk about the financial efficiencies of open access, when it comes right down to it OA is primarily a moral, or if you prefer ideological, issue. It’s not really about saving money, though that’s a welcome side-effect. It’s about doing what’s right.

I’m expecting some kick-back on this one. Fire away; I’ll enjoy the discussion.

“But Mike”, you say, “What’s wrong with publishers making a profit?”

Nothing is wrong with publishers making a profit.

PLOS made an operating profit of 21.5% in 2011 (though they plough it back into their mission “to accelerate progress in science and medicine by leading a transformation in research communication”.) BioMed Central also makes a profit, and since they are a for-profit company they get to keep it, distribute it to shareholders, or what have you. Good on them.

If you can make money by publishing research, that’s great.

The issue is not publishers who make money. The issue is corporations that go by the title “publishers”, but which in fact make money by preventing publication.

Because “publish” means “make public”. The whole point of a publisher is to make things public. The reason the scientists of 30 years ago sent their papers to a publisher was because having a publisher print them on paper and ship them around the world was the most effective way to make them public. And subscriptions were the obvious way to pay for that work. But now that anything can be made public instantly — “Publishing is not a job any more, it’s a button” — giving papers to a “publisher” that locks them behind a paywall is the opposite of publishing. It’s privating.

Yesterday we saw an appalling demonstration of why this is so important. The barrier-based textbook publisher Pearson found that in 2007 a teacher had posted a copy of the Beck Hopelessness Scale on his blog. It’s a 20-question list, intended to help prevent suicide, and totals 279 words. It was published in 1974, and Pearson holds the copyright, selling copies  for $120 — $6 per question, or 43¢ per word.

So naturally Pearson saw their profits being eaten into by the free availability of the Beck Scale. Naturally, rather than contacting the blog author, or the network that it’s part of, they sent a DMCA takedown notice to ServerBeach, who host the web server that the blog was on. And naturally ServerBeach shut down the entire site twelve hours later.

This site, Edublogs, is home to 1,451,943 teacher and student blogs. Yes, you read that right. One and a half million blogs.

So to recap: because a teacher five years ago posted a copy of a 279-word, 38-year-old questionnaire that costs $120, the publisher shut down 1.5 million blogs. That works out at 0.008¢ per blog.
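For anyone who wants to check that figure, it’s simply the list price divided across the affected blogs — a back-of-the-envelope calculation, nothing more:

\[
\frac{\$120}{1{,}451{,}943 \text{ blogs}} \approx \$0.000083 \approx 0.008\text{¢ per blog}
\]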

We could talk all day about all the things that went wrong here — the ludicrously unbalanced DMCA (“half a DeMoCrAcy”), the idiot response of ServerBeach — but I want to focus on one issue. The reason Pearson issued a DMCA takedown is because they make their money by preventing access. It’s the nature of the beast. If your business model is to prevent people from making things public, then this kind of thing is inevitable. Whereas it is literally impossible for PLOS or BMC ever to perpetrate this kind of idiocy because their business model is to make things public. When someone else takes a thing that they have made public and makes it more public, then great! No-one has to issue any DMCA takedowns!

And this is why there is a fundamental, unbridgeable divide between open-access publishers and barrier-based publishers. It’s why no amount of special programmes, limited-time zero-cost access options, reductions in subscription rates, access to back-issues and so on will ever really make any difference. The bottom line is that we want one thing — access to research — and barrier-based “publishers” want the exact opposite.

However nice they are, however much their hearts are in the right place, they want one thing and we want the opposite. And that just won’t do.

They’re going to have to go. All of them.

In a third “open letter to the mathematics community”, Elsevier have announced that, for “the primary mathematics journals”, they now offer free access to all articles over four years old. The details page shows that 53 journals are involved.

I like to give credit where it’s due, and this is a significant move. It’s much more important than the initiatives we hear of from time to time when access to various journals is offered for a limited window: it means there is a substantial body of work that will now be freely and permanently available.

In a comment on John Baez’s Google+ post, Joerg Fliege writes:

One should also mention that opening up access to a handful of older issues of math journals will not affect the bottom line of Elsevier’s revenue much. They are giving something away that, in the greater scheme of things, has essentially a business value near 0.

How kind of them.

But I think this is unnecessarily cynical and negative. A move like this should be judged not on what it costs Elsevier to do, but on the benefit that it gives the research community. If they can find things to do that cost them little or nothing but provide a real benefit, then that’s all to the good — as I argued in the How Elsevier Can Save Itself posts [part 0, part 1, part 2, part 3]. They should not be criticised for that!

That said, Baez does raise a crucial question in that Google+ post:

Why just math journals? Because we’re the ones who are making the most noise! Folks from many other sciences have joined the boycott – but you need some leaders in your field to get aggressive if you want to get Elsevier to do you a favor like this.

An important challenge for Elsevier right now is to prove that they are really making an effort to contribute to the progress of research across the board, rather than just trying to buy off the mathematical community, which has caused them the most irritation up to this point.

Can they meet that challenge?

More of my thoughts on the Finch Report; you may wish to read part 1 first. As before I will be quoting from the executive summary (11 pages) rather than the full report (140 pages).

Changing culture

Section 4 (What needs to be done, on page 7) begins as follows:

Implementing our recommendations will require changes in policy and practice by all stakeholders. More broadly, what we propose implies cultural change: a fundamental shift in how research is published and disseminated.

This is a crucial point. Cultural change is exactly what’s needed — not just in how research is published, as noted in the report, but even more importantly in how it’s evaluated. In particular, we’re going to have to stop assessing research by what journal it’s published in, and start looking at the value of the actual research.

This is already important — it always has been, because the use of journal reputation as a proxy for research quality has always been appallingly error-prone and misleading. But it’s going to become more and more important as open access grows more prevalent and a greater proportion of research moves into OA megajournals such as PLoS ONE, Sage Open and NPG’s Scientific Reports. These things are just too darned big to have a meaningful reputation. If you try to judge a PLoS ONE paper on the basis of the journal’s impact factor (4.411), you’ll quickly run aground: that’s a weak IF for a medic, but very strong for a palaeontologist. PLoS ONE is increasingly one of the journals of choice for palaeo papers, but it’s looked down on in astronomy. A question like “what’s the quality of PLoS ONE papers?” is about as meaningful as “what’s the price of property in London?” It depends on whether you’re talking about Knightsbridge or Peckham.

This is one of the fringe benefits of the shift towards megajournals: it’s going to make everyone see just how fatuous judgement by impact factor is. We’re going to see the end of comments on Guardian articles that say “my department actively discourages us from publishing in journals with IF less than 6.0”.

Unilateral action by the UK

Well, I seem to have gone off on a bit of a tangent there. Back to the Finch Report, pages 7 and 8:

Key actions: overall policy and funding arrangements

v. Renew efforts to sustain and enhance the UK’s role in international discussions on measures to accelerate moves towards open access.

This is also important. I like it that the Finch Report seems generally to advocate that we in the UK should lead the way in open access. But it’s also true that if we push on ahead, implementing mandatory open access unilaterally, we’ll be at a disadvantage compared with other countries: they will get our research for free, but we won’t get theirs till they follow suit.

And I am fine with that. Obviously it can’t continue indefinitely, but if taking a short-term financial hit is what it takes to get the world onside, that’s cool. Doing science costs money. And you haven’t done science till you’ve published your result. And you haven’t really published it until everyone can get it.

Non-commercial use

Now we come to a part of the report that I am really unhappy with. This is from the list in the section Key actions: publication in open access and hybrid journals, on page 8:

x. Extend the range of open access and hybrid journals, with minimal if any restrictions on rights of use and re-use for non-commercial purposes.

There’s that non-commercial clause again. This is worrying. If the Finch Report really is about what’s best for the country and for the world, there is no justification for NC. We want businesses to thrive as well as universities. And there are more businesses in the world than publishers! Cameron Neylon said this best in his Finch Report review, Good steps but missed opportunities:

This fudge risks failing to deliver on the minister’s brief, to support innovation and exploitation of UK research. This whole report is embedded in a government innovation strategy that places publicly funded knowledge creation at the heart of an effort to kick start the UK economy. Non-commercial licences can not deliver on this and we should avoid them at all costs.

That’s exactly right.

I will have more to say on this in a future post.

The role of repositories

There is a section headed Key actions: repositories on page 9. Tellingly, it has only two points, compared with 5, 6 and 5 for the other three key actions sections. Here is the second of those points:

xviii. Consider carefully the balance between the aims of, on the one hand, increasing access, and on the other of avoiding undue risks to the sustainability of subscription-based journals during what is likely to be a lengthy transition to open access. Particular care should be taken about rules relating to embargo periods. Where an appropriate level of dedicated funding is not provided to meet the costs of open access publishing, we believe that it would be unreasonable to require embargo periods of less than twelve months.

Who is the “we” that believes a six-month embargo period would be “unreasonable”?

Obviously not Research Councils UK, who recently stated “Ideally, a paper should become Open Access as soon as it is published. However […] the Research Councils will accept a delay of up to six months in the case where no ‘Article Processing Charge’ is paid.”

Obviously not the Wellcome Trust, whose policy states that it: “requires electronic copies of any research papers that have been accepted for publication in a peer-reviewed journal, and are supported in whole or in part by Wellcome Trust funding, to be made available through PubMed Central (PMC) and UK PubMed Central (UKPMC) as soon as possible and in any event within six months of the journal publisher’s official date of final publication”.

No. “We” can only mean the publishers’ lobby. They hate repositories, and were somehow allowed to nobble all references to Green OA in the report. Don’t believe me? Search for the word “green” in the executive summary: zero hits in eleven pages. Try it in the main report? Three hits in 140 pages: one on page 16, parenthetical (“… a version of a publication through a repository (often called green open access)”), one on page 120, a repeat (“… a version of a publication via a repository, often after an embargo period. This strand is often called green open access”) and one on page 130 (an unrelated mention of the HM Treasury Green Book).

This is one of the most disturbing aspects of the report, and I can see why Stevan Harnad is irate.

Let us move on to happier matters.

Transparency and competition

From page 10:

One of the advantages of open access publishing is that it brings greater transparency about the costs, and the price, of publication and dissemination. The measures we recommend will bring greater competition on price as well as the status of the journals in which researchers wish to publish. We therefore expect market competition to intensify, and that universities and funders should be able to use their power as purchasers to bear down on the costs to them both of APCs and of subscriptions.

I think this is a very important and much neglected point, and it makes me want to write a blog on why author-pays is inevitably more economical than reader-pays. (Short version: granularity of transactions is smaller, so the market is efficient and real competition comes into play, as we are seeing with the launch of PeerJ.)


From page 10:

Our best estimate is that achieving a significant and sustainable increase in access, making best use of all three mechanisms, would require an additional £50-60m a year in expenditure from the HE sector: £38m on publishing in open access journals, £10m on extensions to licences for the HE and health sectors and £3-5m on repositories.

*Cough* *splutter* Hey, what now?

So let’s get this straight. Transitioning from subscription to open access is going to cost us £10M more on licences than we’re already paying? Rather than, say, £10M less, as we start cancelling subscriptions we don’t need?

This seems to be pure fantasy on the part of the publishers.

Not only that, the £38M is based on an “average APC” of … get ready … £1,500. (This is not stated in the executive summary, but it’s on page 61 of the full report.) That number is a frankly ludicrous over-estimate, being nearly double the $1350 =~ £870 charged by PLoS ONE, and nearly three times as much as the $906 =~ £585 found as the average of 100,697 articles in 1,370 journals by Solomon and Björk (2012).

So based on this more realistic APC, the £38M comes down to £14.8M. Throw out the absurd extra £10M that publishers want for extra subscription licences, and the total cost comes down from “£50-60M per year” to about £19M. Still not chicken-feed, but a lot less painful, even in the short term.
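For transparency, here is the arithmetic behind that revision — it simply rescales the report’s £38M publishing estimate by the ratio of the Solomon and Björk average APC to the assumed £1,500, drops the extra £10M for licences, and keeps the £3-5M for repositories:

\[
£38\text{M} \times \frac{£585}{£1{,}500} \approx £14.8\text{M}, \qquad £14.8\text{M} + £3\text{–}5\text{M} \approx £19\text{M}
\]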

And finally …

The report finishes on an upbeat note (page 10) and so do we:

We believe that the investments necessary to improve the current research communications system will yield significant returns in improving the efficiency of research, and in enhancing its impact for the benefit of everyone in the UK.

Yes. Absolutely right. Even if we only thought about academia, the financial case for open access would be unanswerable. But there is more to the world than academia, and the real benefits will be seen elsewhere.


Anyone who is not yet heartily sick of the Finch Report can read lots more analysis in the articles linked from Bjorn Brembs’s article The Finch Report illustrates the new strategy wars of open access at the LSE’s Impact blog.

As you’ll know from all the recent AMNH basement (and YPM gallery) photos, Matt and I spent last week in New York (with a day-trip to New Haven). The week immediately before that, I spent in Boston with Index Data, my day-job employers. Both weeks were fantastic — lots of fun and very productive. But they did mean that between the scheduled activities and getting a big manuscript finally submitted, I’ve been very much out of touch, and I’m only now catching up with what’s happened in The Rest Of The World while I’ve been sequestered in various basements photographing sauropod vertebrae.

Matt measuring the width across the preacetabular lobes of the fused ilia on the sacrum of the referred “Morosaurus” sp. specimen, AMNH 690, illustrated by Osborn (1904: fig. 2A-E). Behold the wonder that is the Big Bone Room.

The two big events in the Open Access world while I was away were the launch of PeerJ and the release of the Finch Report. I’ll write about PeerJ in future, but today I want to say a few words on the Finch Report. I’ve deliberately not read anyone else’s coverage of the report yet, in the hope of forming an uninfluenced perspective. I’ll be very interested, once I’ve finished writing this, to see what people like Cameron Neylon, Stephen Curry and Peter Murray-Rust have said about it.

What is the Finch Report, you may ask? The introduction explains:

The report recommends actions which can be taken in the UK which would help to promote much greater and faster access, while recognising that research and publications are international. It envisages that several different channels for communicating research results will remain important over the next few years, but recommends a clear policy direction in the UK towards support for open access publishing.

So the first point to make is that it’s very good news about the overall direction. In fact, it would be easy to overlook this. The swing that’s happened over the last six months has been slow enough to miss, but the cumulative effect of myriad small shifts has been enormous: where there used to be a lot of skepticism about open access, pretty much everyone is now accepting that it’s inevitable. (See this compilation of quotes from US congressmen, UK government ministers, publishers, editors and professors.) The questions now are about what form ubiquitous open access will take, not whether it’s coming. It is.

But there’s an oddity in that introduction which is a harbinger of something that’s going to be a recurring theme in the report:

[Open access publishing] means that publishers receive their revenues from authors rather than readers, and so research articles become freely accessible to everyone immediately upon publication.

People who have been following closely will recognise this as the definition of Gold Open Access — the scheme where the author (or her institution) pays a one-time publication fee in exchange for the publisher making the result open to the world. The other road, known as Green OA, is where an author publishes in a subscription journal but deposits a copy of the paper in a repository, where it becomes freely available after an embargo period, typically six to twelve months. That Green OA is not mentioned at this point is arguably fair enough; but that OA is tacitly equated with Gold alone feels much more significant. It’s as though Green is being written out of history.

More on this point later.

Green and Gold Chrysogonum virginianum Flower 3008 by Derek Ramsey, from Wikimedia Commons.

The actual report is 140 pages long, and I don’t expect it to be widely read. But the executive summary is published as a separate document, and at 11 pages is much more digestible. And its heart is in the right place, as this key quote from p4 tells us:

The principle that the results of research that has been publicly funded should be freely accessible in the public domain is a compelling one, and fundamentally unanswerable.

Amen. Of course, that is the bedrock. But more practically, on page 3, we read:

Our aim has been to identify key goals and guiding principles in a period of transition towards wider access. We have sought ways both to accelerate that transition and also to sustain what is valuable in a complex ecology with many different agents and stakeholders.

I do want to acknowledge that this is a hard task indeed. It’s easy to pontificate on how things ought to be (I do it all the time on this blog); but it’s much harder to figure out how to get there from here. I’m impressed that the Finch group set out to answer this much harder question.

But I am not quite so impressed at their success in doing so. And here’s why. In the foreword (on page 2) we read this:

This report … is the product of a year’s work by a committed and knowledgeable group of individuals drawn from academia, research funders and publishing. … Members of the group represented different constituencies who have legitimately different interests and different priorities, in relation to the publication of research and its subsequent use.

My most fundamental issue with the report, and with the group that released it, is this. I don’t understand why barrier-based publishers were included in the process. The report contains much language about co-operation and shared goals, but the truth as we all know is that publishers’ interests are directly opposed to those of authors, and indeed of everyone else. Who does the Finch Group represent? I assumed the UK Government, and therefore the citizens of the UK — but if it’s trying to represent all the groups involved in academic activity, there’s a conflict of interests that by its nature must prevent everyone else from clearly stating what they want from publishers.

This isn’t an idle speculation: the report itself contains various places where it suddenly says something odd, something that doesn’t quite fit, or is in conflict with the general message. It’s hard not to imagine these as having been forced into the report by the publishers at the table (according to the membership list, Bob Campbell, senior publisher at Wiley Blackwell; Steve Hall, managing director of IoP Publishing; and Wim van der Stelt, executive VP of corporate strategy at Springer). And I just don’t understand why the publishers were given a seat at the table.

And so we find statements like this, from p5:

The pace of the transition to open access has not been as rapid as many had hoped, for a number of reasons. First, there are tensions between the interests of key stakeholders in the research communications system. Publishers, whether commercial or not-for-profit, wish to sustain high-quality services, and the revenues that enable them to do so.

This is very tactfully put, if I might say so. Distilled to its essence, this is saying that while the UK government, universities, libraries, hospitals and citizens want open access, publishers want to keep the walls that give them their big profits. The bit about “high-quality services” is just a fig-leaf, and a rather transparent one at that. Reading on, still on p5:

There are potential risks to each of the key groups of players in the transition to open access: rising costs or shrinking revenues, and inability to sustain high-quality services to authors and readers.

Those all sound like risks to the same group: publishers. And again, there is no reason I can see why these need be our problem. We know that publishing will survive in a form that’s useful to academia — the success of BioMed Central and PLoS, and the birth of ventures like eLife and PeerJ show us that — so why would it be any part of our responsibility to make sure that the old, slow, expensive, barrier-based publishers continue to thrive?

Reading on:

Most important, there are risks to the intricate ecology of research and communication, and the support that is provided to researchers, enabling them to perform to best standards, under established publishing regimes.

I don’t understand this at all. What support? Something that publishers provide? I just don’t get what point is being made here, and can only assume that this “intricate ecology” section is one of the passages that the publishers had inserted. I wonder whether it’s a subtle attempted land-grab, trying to take the credit for peer-review? At any rate, it’s wildly unconvincing.

And so we come to the actual recommendations of the report. There are ten of these altogether, on pages 6-7, and they begin as follows:

We therefore recommend that:

i. a clear policy direction should be set towards support for publication in open access or hybrid journals, funded by APCs, as the main vehicle for the publication of research, especially when it is publicly funded;

So there it is: The Finch Report says that Gold Open Access is the way forward.

And despite my carping about publishers’ involvement in the process, and their dilution of the output, I’m pretty happy with that recommendation. Of course, there are a hundred questions about who will pay for OA (though they will be considerably less pressing in a world where $99 buys you all the publishing you can eat at PeerJ). Lots of details to be ironed out. But the bottom line is that paying at publication time is a sensible approach. It gives us what we want (freedom to use research), and provides publishers with a realistic revenue stream that, unlike subscriptions, is subject to market forces. (I will enlarge on this point in a subsequent post.)

To briefly summarise the ten recommendations:

i. Overall policy should be to move to Gold OA.
ii. Funders should provide money for Gold OA charges.
iii. Re-use rights, especially non-commercial, should be provided.
iv. Funding of subscriptions should continue during transition.
v. Walk-in access should be “pursued with vigour”.
vi. We must work together to negotiate and fund licences.
vii. Subscription price negotiations should take into account the forthcoming transition to OA.
viii. Experimentation is needed on OA monographs.
ix. Repositories should be developed in “a valuable role complementary to formal publishing”.
x. Funders should be careful about mandating short embargo limits.

Mostly good stuff. I’m not happy about the emphasis on non-commercial forms of re-use in (iii), and of course walk-in access (v) is spectacularly dumb. (vi) seems a bit vacuous, but harmless I suppose — I’m not sure what point it’s trying to make.  (ix) is quietly sinister in its drive-by relegation of repositories to a subsidiary role, and of course (x) is pure publisher-food. Still, even with these caveats, the overall thrust is good.

Well, this has already gone on much longer than I intended, so I will leave further analysis for next time. For now, I am inclined to award the Finch Report a solid B+. I’ll be interested to see how that assessment stands up when I’ve read some other people’s analysis.

