The multiple uses of peer review: an interview with Marcel LaFlamme

Note: Review Commons posted this blog earlier this month. PLOS is cross-posting it to amplify the discussion of our participation in Review Commons, as well as other PLOS initiatives.

Author: Thiago Carvalho

In December of 2022, the Howard Hughes Medical Institute (HHMI) hosted a meeting on “Recognizing Preprint Peer Review” at its Janelia Research Campus in Virginia. Review Commons was represented by project leader Thomas Lemberger and managing editor Sara Monaco (HHMI supports Review Commons). The meeting also featured a look at our platform from the perspective of affiliate journals, presented by Marcel LaFlamme, Open Research Manager at the Public Library of Science (PLOS). LaFlamme originally trained as a cultural anthropologist – during his postdoc, he studied how scientific organizations can promote open science and collaboration. As LaFlamme told us, PLOS is “increasingly thinking about how to advance open science beyond the published article.” We discussed how peer-reviewed preprints can contribute to this goal, the different uses (and users) of peer review, automated reviewing, and more.

Could you tell us a bit about the pros and cons of Review Commons from the perspective of affiliate journals?

One of the advantages of Review Commons from PLOS’s perspective is efficiency. We’re always asking ourselves: How can we provide better service to authors? How do we make that time from submission through decision to publication quicker? This is something that authors tell us is really important and that Review Commons helps us to achieve. Decreasing the burden on the reviewer pool is a second thing that benefits both journals and the scientific community; trying to cut down on those endless cycles of resubmission is something that we think is good for science. One fewer request that our reviewers are getting from a PLOS journal frees up capacity elsewhere in the reviewer pool to do other things.

Those are the two concrete advantages of Review Commons, in the here and now. I think a third advantage, one that is still coming into being, is the way that Review Commons begins to decouple evaluation from publication or dissemination of research. That’s something that we are increasingly talking about at PLOS: what does it mean to decouple those functions and to configure them in different ways?

At the Janelia meeting, I also talked a little bit about challenges for editors. The big take-away was that an editor is not an editor is not an editor. We really need to understand editor concerns and challenges in the context of the editorial model and workflow of the journal that they work on. I think the PLOS portfolio is a good place to look at that because we have journals that are edited in quite different ways: by professional staff editors, by active researchers who are serving as editors-in-chief, and – for PLOS ONE – by Associate Editors who are researchers and who may serve as handling editor for just one paper a year. We are really paying attention to the differences in those positionalities and the different concerns that may emerge from those viewpoints.

Do you think there are additional ways that PLOS could contribute to Review Commons?

We are very open to thinking about what that looks like. There’s one conversation that affiliate journals are having around the reviewer pool. To date, EMBO Press has anchored the initial review process for Review Commons. This leverages the excellent reviewer pool that EMBO has at its disposal, but it also carries the scope limitations of EMBO (or any other single publisher). So, there’s a conversation about how to broaden that initial reviewer pool to access different types of expertise and whether PLOS could have a role in that.

There are broader questions, too. If we are starting to think about peer review as a service—which I think is one way we can talk about Review Commons — then we might also ask, to whom is it a service? So far, Review Commons has been a service to authors on the one hand, to have this refereed preprint, but then also a service to the affiliate journals. At PLOS we think these two pieces are valuable but would also ask: who else might peer review as a service be directed towards? Could that be funders? Could that be institutions? The exact form that this takes and how PLOS or Review Commons fits into it… I think we’re still in the early days of figuring that out. But Review Commons has posed a very important question by offering peer review as a service and cinching it up with existing editorial processes, which is something that not all preprint review services have been as successful at doing to date.

Do different editors assess the quality of reviews in different ways, and what does this say about how we should assess the quality of peer review overall?

As we see more of this decoupling of dissemination and evaluation, we may be asking peer review to do new things. We may be asking it to contextualize research for non-expert audiences who wouldn’t be able to read a manuscript and draw the same or similar conclusions as the expert reviewers. When peer review is used not only to make an up-or-down editorial decision but also to contextualize work that is already public and available for people to engage with, then I think what quality looks like is a bit different from what it looked like in that legacy context.

As we ask peer review to do more and more things, should we also be looking at new ways to recognize the work of reviewers?

Another way to ask that question is: should outputs other than published articles and big grants count toward research assessment and the development of scientific careers? I think the answer to that is definitely yes. One thing that PLOS is increasingly interested in from a policy perspective is how existing systems of research assessment that overvalue the research article and the big grant can serve as a structural barrier to open science. This is something that we are thinking about as an organization: how do we enable evidence-based decision making about how research assessment can change to properly reward and credit a broader range of research outputs?

Peer review is one piece of that, but sharing data and other outputs falls into this category as well. How you do this, how you credit reviews, is hard. One approach is about counting and weighting the number of reviews and deciding how much different sorts of review should count. Another approach is about making the reviews themselves visible and enabling different kinds of assessment, both qualitative and scalable, machine-driven assessment of the review text. PLOS has taken the latter approach for some time, by giving our authors and reviewers the option to make reviews public and to share reviewer identities. I think these forms of sharing, in and of themselves, are not reward or credit, but they may be a precondition for a proper system of reward or credit.

Can peer review be automated?

At PLOS we are interested in the complementarity of expert human peer review and automated approaches. Last year there was a commentary published on the question “is the future of peer review automated?” – a very provocative title. The conclusion of the piece (and its authors included folks who are building software and who have a stake in the answer to that question being “yes”) was that a complementary approach is what we need. The technology is nowhere near the point where the application of human judgment could be sidelined. But automation can be used to check compliance with guidelines and checklists, or to detect the presence of related research outputs; these are things that machine scoring can likely do better than a human editor who is doing them for a high volume of manuscripts, day in and day out.
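Purely as a hypothetical illustration of that last point (this is not PLOS or Review Commons tooling), a compliance screen of this kind can be as simple as pattern-matching a manuscript’s text against a checklist. The checklist items and patterns below are assumptions made up for the sketch.

```python
# Minimal illustrative sketch: flag whether a manuscript's text appears to
# include common compliance items (a data availability statement, an ethics
# statement, links to related research outputs). Hypothetical example only.
import re

# Hypothetical checklist: each item maps to a pattern suggesting that the
# corresponding statement or linked output is present in the text.
CHECKLIST = {
    "data availability statement": re.compile(r"data availability", re.IGNORECASE),
    "ethics statement": re.compile(r"ethic(s|al)\s+(statement|approval)", re.IGNORECASE),
    "repository or DOI link": re.compile(r"(doi\.org|github\.com|zenodo\.org|osf\.io)", re.IGNORECASE),
}

def screen_manuscript(text: str) -> dict[str, bool]:
    """Return a found/missing flag for each checklist item in the text."""
    return {item: bool(pattern.search(text)) for item, pattern in CHECKLIST.items()}

if __name__ == "__main__":
    sample = (
        "Data availability: all sequencing reads are deposited at "
        "https://doi.org/10.5281/zenodo.XXXXXXX."  # placeholder DOI
    )
    for item, present in screen_manuscript(sample).items():
        print(f"{item}: {'found' if present else 'missing'}")
```

A check like this only flags likely omissions for an editor to confirm; the human judgment the interview describes remains in the loop.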
