Versions or visions
Peter Banks made some thought-provoking comments on the liblicense-L listserv last week. Peter is a past president of the Society of National Association Publications. In general he is concerned that Open Access publishing will undermine conventional peer review. In his posting he talks about some of the challenges he sees for a review system like the one we are using on PLoS ONE. Peter brings up such valid and concrete points that I thought it was worth reposting them here.
Peter’s first point is:
Versioning. A manuscript becomes less like an inanimate object, frozen in time, than a living organism, constantly being modified by the author in response to criticism. There should be some way of tracking the evolution of the author’s ideas.
I don’t really have a disagreement with this. There will definitely be a need to track the evolution of a paper. Wikipedia does this very effectively, and its entries are altered far more frequently than PLoS ONE’s are likely to be. In fact, for the time being we don’t plan to have papers being constantly modified. Rather, there will be one version of a paper (also archived in PubMed Central), with subsequent annotations and comments, whether by the author or others, being separately citable additions.
Determining the canonical version. One of the traditional functions of publishing is determining and certifying the canonical version of a manuscript. Will there be such a function in the future, and, if so, who will be responsible for it? Will scientific authors adopt the model of Whitman’s Leaves of Grass, revising manuscripts from cradle to grave? Will we sing the body of literature electric?
Essentially this is a revised version of the previous problem. When a piece of work is cited it is very important to know what is being cited. I’m not sure that a truly canonical version is needed, though, in science or in anything else, just a way of making a precise citation to a precise version. Canonical versions are something imposed by the act of printing. Even so, there are plenty of different versions of the works of Shakespeare and Chaucer, while the revision of poems by poets is commonplace. And of course F. Scott Fitzgerald completely reordered “Tender is the Night” after its publication, twice!
A time will come, soon I hope, when it will be possible to revise published papers, but Peter is right that the problems of how to cite and archive these versions need solving for this to happen. However, I would rather tackle such problems in order to make scientific papers a better reflection of the scientific process than use them as an excuse to maintain the current, imperfect, system. The fact is that scientists’ ideas and opinions do change, so why shouldn’t their papers reflect that?
Synchronizing versions. The paper may exist in many places, in PMC, in institutional repositories, at the publisher’s site. Is there, or should there be, some mechanism of synchronizing these versions with what the author considers to be the up-to-date version?
Again a restatement of the problem of versioning, but this aspect is already with us. Can you be sure that a paper on a researcher’s website or in an institutional repository is the same as the ‘published’ version? Especially when many ‘conventional’ publishers don’t allow the deposition of that published version, only the ‘final accepted’ version submitted by the author. Of course, when papers are published Open Access there is no need for there to be more than one version, as everyone has access to it.
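At bottom, the synchronization worry above is a question of whether two copies of a paper are byte-identical. As a minimal sketch (the function names are mine, not any repository’s API), a checksum comparison is enough to detect divergence between a publisher’s copy and a repository copy:

```python
import hashlib
from pathlib import Path


def fingerprint(path):
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def same_version(published_copy, repository_copy):
    """True only if the two files are byte-for-byte identical."""
    return fingerprint(published_copy) == fingerprint(repository_copy)
```

The caveat is that even an innocent re-rendering of the same text produces a different file, so a real synchronization scheme would also compare declared version identifiers rather than relying on bytes alone.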
Liability and safety. Should papers in clinical medicine be made freely available, as PLoS seems poised to do? Patients do crazy things even with reviewed literature. Is there liability if someone is injured by inappropriately modifying treatment or self-care on the basis of an unreviewed article? Of course, one could make the argument that patients themselves are in the best position to offer real-world feedback on the safety and efficacy of drugs, since they themselves take them.
Leaving aside how patronizing this is (“we must restrict the general public’s access to information for their own good”), it misses an essential point. No one is suggesting having an unreviewed literature, just that the review can be done in a different way, one which does not require us to assume that three referees will never make mistakes. However, for clarity I guess I need to state this one more time:
PLoS ONE is not publishing papers without review. Submissions are reviewed before publication by Academic Editors, who often consult with external referees. Where PLoS ONE differs is that this review concentrates on technical rather than subjective issues. Also, review does not cease with publication (as it does at many other journals) but continues in open form after publication. If anything, PLoS ONE is exposing papers to more review than other journals, not less.
It is good to have people like Peter out there raising questions like these. It is even better that he is raising them on a listserv available to all. That way everyone interested can follow the discussion and debate. It might even mean that these interested parties will be able to work together to find solutions to the problems Peter brings up faster than if they worked away at them in isolation.
Now wouldn’t it be good if more of science were able to operate like that?
I sat in on a special discussion at the annual Society for Neuroscience meeting on open access publishing this morning. The panel was composed of the following people:
* Chair: David van Essen
* Mark Doyle: Assistant Director, Journal Information Systems, American Physical Society
* Heather Joseph: Executive Director, Scholarly Publishing and Academic Resources Coalition (SPARC)
* Donald Kennedy: Former head of the FDA, Editor in Chief, Science
* Michael Keller: Stanford University Librarian, Director of Academic Information Resources, Publisher, HighWire Press, and Publisher, Stanford University Press
* Jasna Markovac: Senior Vice President and Director of Development, Elsevier
* Diane Sullenberger: Executive Editor, PNAS
The consensus amongst this group of speakers seemed to be that the future of publishing will look different, but no one seemed quite ready to lunge forward yet. There was a lot of discussion about copyrights, version control, and, of course, money.
Diane Sullenberger brought up the fantastic idea of “article ranking”, much like how Amazon ranks its sales items, as a novel use of the internet in publishing.
All in all, despite some obvious hesitance, the overall feeling among the panel seemed to be one of inevitability: open access is important (especially to the upcoming generation of scientists), and the internet is simply much faster and much more interactive, and thus better suited to modern research.
It really is a pity that versioning of manuscripts is not seen as a powerful addition to the scientific publishing process.
In particular, the publication of new hypotheses and possibly controversial findings would be better served if new data or insight were added to a manuscript rather than published as new documents.
Do you plan to explore versioning of manuscripts beyond what you describe above?
That would be a very big yes. We very much want to allow papers to be living documents. We want authors to be able to publish revisions and updates of their previous work when appropriate. Within PLoS ONE that would be pretty easy to do, but we are holding back at the moment only because of the problems that Peter has highlighted and because getting this thing launched in its current form is tough enough. At the moment I’m hoping to have a plan for updating and revising published papers in place within six months of PLoS ONE’s launch.
Glad you had a good time at SfN, and thanks for the report on the panel. Sandra Aamodt has also given her take on the panel over on Action Potential. There are also longer notes on the panel from Jake Young at Pure Pedantry (and his notes have been summarised on Cognitive Daily).
Nice to hear that the idea of Article Ranking was mentioned. We are developing that for PLoS ONE. I don’t think we will quite have it built in time for launch but it should be up and running within a few weeks.
Our plan was to have, in addition to an overall rating, a number of sub-categories for things like originality of approach, unexpectedness of the advance, how thought-provoking the paper is, how stylish the presentation is, and how secure the results appear to be. We are still working on what to call such measures; I’m in favour of things like Truth, Beauty, Strangeness, Charm.
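As a minimal sketch of how such a multi-axis rating might be aggregated, and assuming (my assumption, not PLoS’s actual design) that readers score each sub-category from 1 to 5 and the overall rating is the mean of the per-category averages:

```python
from dataclasses import dataclass, field
from statistics import mean

# Illustrative sub-categories; the names are placeholders, not PLoS ONE's.
CATEGORIES = ["originality", "unexpectedness", "thought_provocation", "style", "security"]


@dataclass
class ArticleRatings:
    """Collects per-reader ratings (1-5) for one article, Amazon-style."""
    scores: dict = field(default_factory=lambda: {c: [] for c in CATEGORIES})

    def rate(self, **category_scores):
        """Record one reader's scores, e.g. rate(originality=4, style=5)."""
        for category, score in category_scores.items():
            if category not in self.scores:
                raise ValueError(f"unknown category: {category}")
            if not 1 <= score <= 5:
                raise ValueError("scores run from 1 to 5")
            self.scores[category].append(score)

    def average(self, category):
        """Mean score for one sub-category, or None if it has no votes yet."""
        votes = self.scores[category]
        return mean(votes) if votes else None

    def overall(self):
        """Overall rating: mean of the per-category averages that have votes."""
        averages = [mean(votes) for votes in self.scores.values() if votes]
        return mean(averages) if averages else None
```

Whether the labels end up as Truth, Beauty, Strangeness, and Charm is purely a presentation question; the aggregation underneath would look the same.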