Peer Review Week research integrity webinar: Your questions answered
Peer Review Week has come and gone, but our work is not yet over. A few weeks ago, we organized a webinar around this year’s theme, “Research Integrity: Creating and supporting trust in research.” We had three esteemed panelists: Ivan Oransky, co-founder of Retraction Watch, Fiona Fox, Executive Director of the Science Media Center, UK, and Renee Hoch, Managing Editor of the PLOS Publication Ethics Team. There were more than 1,200 registrants and so many questions that we did not have time to address them all. We asked select members of PLOS staff to answer most of them in this follow-up blog. But before we get to the questions, here is the webinar if you’d like to watch it.
Retraction-specific questions:
Q: Is a retraction still seen in the academic community as a potentially career-ending/negative event for an author? Is this view changing?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: This is indeed still an issue, possibly in some countries more than others. We hope that the general view will change, and that those in positions to decide on sanctions, promotions, tenure decisions, and funding will consider case details rather than take a blanket view on retractions. Not all retractions are the same. There are cases of blatant and purposeful academic misconduct, which should have a negative effect on the perpetrator’s career, but even in these cases there may be co-authors who were uninvolved in and unaware of the misconduct. There are also cases that stem from misunderstandings about authorship or data ownership, reporting errors and data unavailability, or honest errors. In some of these cases, a retraction may be an indicator of a training gap. Punishing retractions uniformly, without considering context, may not get to the root of the issue, helps to perpetuate stigma around retractions, and may deter researchers from stepping forward when they become aware of major concerns. Authors who proactively reach out to request retraction when they discover an error with their paper should be applauded rather than punished. Is this view changing? Institutional and funding body representatives would be better placed than publishers to answer this, although publishers can do their part by providing clear information in retraction notices as to the reasons for retraction, and by notifying the relevant parties in cases where an institutional investigation may be warranted.
Q: Do you think questions/concerns raised on places like PubPeer should be anonymous and without any accompanying declaration of conflicts of interest?
A: Renee Hoch: It is helpful to give readers the option of listing their name or posting anonymously according to their personal preference. Anonymity can help readers to express concerns without fear of reprisal, particularly in cases where one raises concerns about work by a colleague, collaborator, or researcher who may contribute to one’s own career (e.g. via grant review). On the other hand, if a reader is comfortable providing their name – whether on a public site such as PubPeer or in a personal communication to the journal/publisher – then they can obtain public credit for their observations (if they self-identify publicly), and the journal/publisher can follow up with them directly to ask for any clarifications and provide an update when the case is resolved. I am a proponent of transparently declaring potential conflicts of interest, but I appreciate that in some cases these interests may be identifying. At PLOS, we follow up on concerns about our content whether or not the person raising them has potential conflicts. Importantly, we focus our investigations on the issues, article content, and evidence (e.g. primary data), not on the person who raised the concerns.
Q: How much evidence do journals need to raise questions about scientific integrity of data?
A: Renee Hoch: We investigate every claim that calls into question the integrity of any paper we publish, but we need readers to provide specific details as to the nature of the concerns and which results are in question. In cases involving data integrity concerns, if a reader has already done an analysis or made an observation that supports their concerns, then providing details of the analysis and/or findings can greatly aid in our assessment.
Q: Is there a time limitation post publication to file a grievance with a Journal?
A: Renee Hoch: No. However, some universities and research institutions have time limits as to when they will investigate concerns, and this can present a barrier to journal-level work in some cases.
Q: Why do journals refuse self-citation and consider it plagiarism?
A: Renee Hoch: We do not refuse self-citation, i.e. citation of one’s previous works. However, re-using previously published content without citing the original source is misleading and in some cases may present copyright concerns. Also, authors should not preferentially cite their own works over others; rather, they should discuss the relevant literature in a balanced manner that accurately represents the current state of the field, giving due attribution to all key contributors.
Q: Why not publish in an Appendix anonymized reviewer comments and author responses?
A: Renee Hoch: While the standard in the field has historically been to hold peer review as confidential, some publishers and other stakeholders have recently taken steps toward opening up peer review and providing transparency as to the article assessment process. Since 2019, PLOS has offered authors the option of publishing peer review history (review reports and decision letters) alongside accepted research articles. We use a separate tab on article pages for peer review history rather than an Appendix. This allows interested readers to navigate easily through the various documents. While we encourage this transparency with regard to article assessment, it is not mandatory. For additional discussion of transparent peer review, see here and here.
Q: Why not routinely provide the public with the possibility of evaluating scientific conclusions through links to the evidence on which conclusions are based?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: This is one of the primary objectives of the open data movement which has made great strides over the past decade. PLOS has a Data Availability policy, which was updated in 2014 to require that authors make publicly available the minimal dataset underlying an article’s findings.
Q: Do you think the timeliness concerns are related to the workforce crisis in peer review?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: I am answering this under the assumption that ‘timeliness concerns’ refer to concerns about the timeliness of research publication, i.e. the time required for submissions to undergo the peer review process. Several factors contribute to this, including difficulties securing peer reviewers who are suitable, willing, and available to contribute to the process. This could be improved by developing more effective tools to help editors find suitable reviewers, and by having peer review contributions formally recognized or even required by tenure and promotion committees and hiring managers. Reviewers for PLOS have the option of linking peer review contributions via their ORCID profiles in cases where they are willing to self-identify as reviewers.
Q: What are some ways to encourage the inclusion of a Limitations section in every paper?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: This could be encouraged through training and reinforcement. Institutions and established investigators could provide (or even require) research reporting training for early career researchers, and one aspect of this training could be to emphasize the importance of discussing limitations as a standard section within each research article. This training can then be reinforced through peer review: as part of their evaluation, reviewers and handling editors should consider whether a manuscript adequately addresses a study’s limitations and assumptions, and request revisions to add or improve upon this where needed. If researchers themselves have been trained to recognize the importance of this section, they will look for it in the articles they evaluate as reviewers. Journals could also include a section on Limitations in their submission guidelines, but ultimately it will be the handling editors and reviewers who evaluate the scientific content of each submission and provide feedback on this type of reporting issue.
Q: Do you feel that there needs to be more of an effort to educate/reinforce an understanding of the process of peer review to increase trust in results?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: To increase trust in results, it may be more effective to focus resources on (a) improving adoption of open data practices, (b) emphasizing the importance of critically evaluating primary research literature, (c) encouraging reproducibility efforts and publication of both confirmatory and contradictory results, and (d) ensuring that news articles and other public discussions accurately summarize the findings without over-hyping or overstating the study’s results. Increasing awareness and understanding of peer review – including its strengths and limitations – may help by informing how the general audience approaches research reports. For example, this may help readers better appreciate the difference between peer-reviewed work that has been vetted by experts in the field and non-peer-reviewed articles (including pre-prints) that may not have been critically assessed before posting/publication.
Questions related to news and promotion:
Q: Do you think that journal press offices should proactively flag their (high-profile) retractions to journalists as well to ensure trust in science?
A: Dave Knutson, Sr. Manager, Communications: Yes, but with a caveat: Only if the journal publicized the paper when it was originally published. They have a responsibility to correct the public record. However, for journals like PLOS ONE, publicly calling out every retraction would end up being white noise to the public.
Q: On what basis do journalists decide what topics to cover or publicize?
A: Dave Knutson: This may not seem satisfying, but it’s years of honing one’s news judgment. Journalists are constantly asking themselves: What does my audience need to know? And that question differs for every outlet. A reporter for the LA Times will answer that question differently than a journalist affiliated with the BBC. That said, here are some other considerations: Timeliness (is it happening now?), Progress (is it new?), Emotion (is there a human element?), Consequence (how many people will it affect?), Proximity (is this a local story?), Conflict (are there differing opinions?), and Novelty (is it unusual?).
Q: What do you think about the role that university press offices play in disseminating hyped up science?
A: Beth Baker, Sr. Media Relations Manager: Press officers have a key role in communicating science accurately. Research (for example, this study) suggests that when press releases contain “hyping” language – unwarranted causal claims, explicit health advice, and so on – subsequent news coverage is more likely also to contain these exaggerations. Meanwhile, inclusion of appropriate caveats and limitations in press releases is linked with their increased inclusion in news coverage. Since more cautious, accurate press releases are not associated with reduced news coverage, there’s no reason for press officers not to communicate with measured language and messages that we’re confident will stand up to scrutiny – and that’s certainly our aim at PLOS. Although university press officers in particular are under increasing pressure to “market” their institution and its research, I’ve also found all those I work with to be dedicated to prioritizing communication that accurately reflects the science – which is as it should be.
Q: Is there a difference between promoting preprints and promoting unpublished research presented at e.g. conferences?
A: Beth Baker, Sr. Media Relations Manager: Preprints and conference presentations are similar in that both may include research which has not (yet!) been peer-reviewed. Both are intended for an audience of researchers – for example, to share results early with colleagues – who are well-qualified to evaluate the science. At PLOS, we recommend against active promotion of either type of unpublished research, instead advising that authors and press officers wait until after peer review. The main difference between preprints and conference presentations in this respect is a practical one: while only conference attendees will see presentations, preprint servers are generally open to everyone, making it more likely that journalists will reach out to authors. We always recommend that if researchers choose to engage with such requests, they stress that their work is still undergoing peer review – and may be subject to change as a result.
Q: How can journal editors ensure that studies conducted in low resource contexts do not have author teams excluding local scientists? This erodes local trust.
A: Dave Knutson: We agree and last year announced this new policy regarding inclusion in global research. You can also read this editorial in PLOS Medicine.
Q: The effort to enhance access often doesn’t get acknowledged or rewarded in the promotion/tenure process. Is that a barrier to the work you’re trying to do?
A: Dan Morgan, Director of Communications and Community Relations: PLOS, and any publisher or organization committed to Open Science and Open Access, would love to see Open publishing and research practices acknowledged and rewarded more. That would certainly be a powerful driver in scholarly communication, an activity motivated by acknowledgement and progress. However, we also believe authors choose Open practices as a matter of scientific integrity, not just reward, especially when those practices are minor behavioral shifts with meaningful reputational and scientific benefits. If authors believe there is an Open Science practice that is discouraged, or one that is not worth the effort unless it is rewarded, we would love to hear about it so we can shine a light on the best ways the Open Science community can help!
Q: What are your thoughts on single-blinded review processes? How can we ensure high-quality and non-conflicting peer reviews?
A: Renee Hoch, Managing Editor of the PLOS Publication Ethics Team: PLOS uses a single-anonymized peer review model in which reviewers know authors’ identities by default, but reviewer identities remain hidden. We also offer reviewers the opportunity to sign their reviews if they wish to do so (about 18% choose to sign across all submissions and journals).
We’ve chosen a single-anonymized review as our default (over double-anonymized) for several reasons:
- Hiding author identities prevents reviewers from identifying potential conflicts of interest, and its effectiveness at limiting bias in peer review is questionable, since studies suggest reviewers can often guess the identities of authors (or believe they can)
- Research suggests that, in cases where double-anonymized peer review is optional, reviewers are more critical of authors who choose to conceal their identities, perhaps introducing a new form of bias
- Single-anonymized review with the option to sign allows reviewers to provide an honest critique without fear of retribution, while at the same time giving them the flexibility to share their identity when they feel comfortable doing so
We do not necessarily see conflicting or contradictory reviews as a bad thing. Indeed, gathering insight from different perspectives is one of the great benefits of having more than one expert reviewer. Such discrepancies can indicate that something in the article is open to interpretation and requires clarification, or they may result from differences in the reviewers’ specific expertise. Identifying and understanding these issues is valuable to the assessment and selection process. We ask our editors to take any differences in reviewer feedback into account during decision-making, and to indicate which changes they agree with and would expect to see in a revision.
We’d like to thank all of our panelists for sharing their expertise with us. It was a rich discussion, which led to a lot of thought-provoking questions. We hope this debate continues far beyond Peer Review Week. Watch this space for updates on our collaboration with other publishers with regard to these issues, as well as updates to our own policies.