
Publish-or-Perish in Context: New research on publication rates says publish more, but is that the best answer for science?

If you’ve worked around academics, you are surely familiar with the phrase publish-or-perish: the idea that publication rate, above all else, is the key to advancement as an academic. Plenty of ink has been spilled over whether we should evaluate researchers with such an emphasis on publication rate; whether publications should be scored using a variety of bibliometric indicators of quality; or whether expert opinion should reign supreme in judging someone’s publication history. There are no easy answers to these questions, but many research institutions have already incorporated some sort of bibliometric index into their evaluations, and even in those that haven’t, the expected publication rate seems to forever edge upwards.

 

A paper published today in PLOS ONE asks exactly that question of research publications: “How Many Is Too Many?” Does high productivity simply spread good science thin across a raft of tiny, unconnected studies, or does a high publication rate mean a greater chance of one of those papers becoming a classic in your field? Or perhaps the effect of publication rate is more like a snowball, generating more citations as a function of the number of publications? The new paper by Larivière & Costas uses a Web of Science dataset of over 28 million publications from 1980 to 2013 to investigate these questions and comes to some interesting conclusions…though for me it opens more questions than it answers.

The conclusion Larivière and Costas come to is that increased productivity does indeed increase the proportion of a researcher’s papers that are highly cited. So, more papers means a better chance of your research being considered important in the field. Looking a bit deeper, though, I wonder if there is more to the story.

They compare the productivity of researchers to their share of the top 1% most cited papers in their broadly defined field, then further examine this measure of impact by career length and the time periods during which the work was done. Authorship was defined, as far as I can tell, by being listed as an author at all, regardless of position in the author list. So, in this analysis, first authorship counted no more than being the 10th author on a large review manuscript. As an ecologist, I find this method problematic for a couple of reasons.
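
To make that measure concrete before getting to my concerns, here is a minimal sketch, in Python with made-up numbers, of how a full-counting version of the metric could be computed. This is only my reading of the methods, not the authors’ code, and the paper’s actual field definitions, citation windows, and data handling are certainly more involved.

    # Illustrative sketch only, not taken from Larivière & Costas. Assumes toy
    # records of (paper_id, field, year, citations, authors) and full counting:
    # every listed author gets equal credit, regardless of author position.
    from collections import defaultdict

    papers = [
        ("p1", "ecology", 1983, 412, ["Smith", "Jones", "Lee"]),
        ("p2", "ecology", 1983, 3,   ["Smith"]),
        ("p3", "ecology", 1984, 950, ["Jones", "Lee"]),
        ("p4", "ecology", 1984, 12,  ["Lee"]),
    ]

    def top_cited_ids(papers, quantile=0.01):
        """Flag papers in the top `quantile` most cited within each field-year group."""
        by_group = defaultdict(list)
        for pid, field, year, cites, _ in papers:
            by_group[(field, year)].append((cites, pid))
        top = set()
        for group in by_group.values():
            group.sort(reverse=True)                      # most cited first
            n_top = max(1, round(len(group) * quantile))  # keep at least one per group
            top.update(pid for _, pid in group[:n_top])
        return top

    def share_of_top_papers(papers):
        """Per researcher: papers authored and the fraction of them that are 'top' papers."""
        top = top_cited_ids(papers)
        counts, hits = defaultdict(int), defaultdict(int)
        for pid, _, _, _, authors in papers:
            for author in authors:   # full counting: the 10th author counts like the 1st
                counts[author] += 1
                hits[author] += pid in top
        return {a: (counts[a], hits[a] / counts[a]) for a in counts}

    print(share_of_top_papers(papers))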

Figure from Larivière & Costas, 2016: Proportion of top 1% most cited papers (y axis), as a function of the number of papers published (x axis), for the cohort of researchers who published their first paper between 1981 and 1985, by domain.

First, in my experience, tenure and promotion committees do take co-authored papers into account, but those papers carry significantly less weight than first-authored ones. Following closely behind authorship is the perceived rank of the journal in which a paper was published. Larivière and Costas’ analysis ignores both author rank and journal rank. This omission would, I think, tend to skew the results toward researchers whose work is highly collaborative, lab managers who are included as last author by convention, or individuals whose methodology lands them on many papers despite little input on the scientific objectives.

 

 

In other words, this analysis over-emphasizes those researchers who, by dint of what they study or what position they hold, have the opportunity to accumulate a large number of co-authorships.
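
As a toy illustration of that point, here is a hedged sketch, again in Python with invented names and an arbitrary 1/position weighting of my own (nothing like this appears in the paper), showing how a researcher who leads on raw paper counts can fall behind once author order is taken into account.

    # Hypothetical comparison of full counting vs. position-weighted counting.
    # The 1/position weights are arbitrary and only meant to show the direction
    # of the effect, not to propose a real bibliometric indicator.
    papers = [
        ["Rivera"],                            # sole-author paper
        ["Chen", "Okafor", "Rivera"],          # Rivera listed last
        ["Chen", "Rivera", "Okafor", "Diaz"],
    ]

    full, weighted = {}, {}
    for authors in papers:
        for position, author in enumerate(authors, start=1):
            full[author] = full.get(author, 0) + 1                           # full counting
            weighted[author] = weighted.get(author, 0.0) + 1.0 / position    # weighted by author rank

    print(full)      # Rivera leads on raw paper counts (3 papers vs. Chen's 2)
    print(weighted)  # but drops behind Chen once author position is weighted

Both schemes count the same set of publications; the only design choice is how much credit each author position gets, which is exactly the choice the full-counting approach sidesteps.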

For some perspective, the figures included in the paper show publication output over a five-year span, and during that time the researchers at the top of the scale produce 30 to 40 papers per year! Clearly not all of these publications are first-authored, but that kind of output, even without first authorship, is astounding; frankly, it is difficult to imagine how much scientific input one could provide with that many manuscripts passing one’s desk in a year. Given the time faculty spend on teaching, research, and service, this workload would be exceptional even among research-only professors at R1 universities.

Figure from Larivière & Costas, 2016: Proportion of top 1% most cited papers, as a function of their number of papers published, for the cohort of researchers who have published their first paper between 2009 and 2013, by domain.

Interestingly, the paper seems to indicate that right now might be a great time for a young scientist in the natural sciences to make a splash. Young researchers with 20 publications under their belt, who published their first paper between 2009 and 2013, had 3% of their papers among the most cited papers in Web of Science. In contrast, researchers who started publishing between 1981 and 1985 had to publish 200 papers to reach half that level of impact. But…keep in mind that this doesn’t control for decreases in scientific output with age, for ageism in the sciences, or for the fact that younger researchers, during their M.S. and Ph.D., can often focus entirely on research without the added responsibilities of teaching and service.

 

The authors conclude that raw output of manuscripts does indeed increase the chances of being heavily cited in your field. To me, though, even setting aside the critiques above, these conclusions raise more questions than they answer.

 

Research funding continues to fall, while the pressure to publish (and, by extension, to bring in funding) increases as universities scrape for every available dollar. For many R1 and R2 universities in the United States this has fueled a relentless drive toward more publications and more grantsmanship, even as those same universities grow enrollment to boost tuition revenue…all while insisting that faculty focus more on research. This creates competing goals for professors. On one hand, more students means more teaching, and universities increasingly treat students explicitly as customers. On the other, publications and grants become ever more important sources of revenue as public funding declines. There has always been tension between the teaching and research roles, but as funding falters professors sit at the epicenter of that tension. Universities seem to be solving the problem by hiring adjuncts to do the teaching and raising the expectations for publication.

 

At the same time, fields such as psychology and medicine are facing a very public replication crisis, and although the problems are less visible, ecology has similar and potentially thornier replication problems. Despite this, research funds are awarded to researchers who publish cutting-edge, boundary-breaking science, not to studies that check whether those eye-popping results are actually true.

 

Neither the tension over research quantity felt by professors, nor the need for replicable results, nor the pressure to produce ground-breaking research is reflected in the analysis by Larivière and Costas. I think we would be remiss, however, if these issues weren’t considered when we interpret their results. In the absence of this context, Larivière and Costas are correct that increased authorship increases the probability that a researcher will be highly cited. But is production at all costs the incentive we want to project onto science? Is a shotgun approach that cuts larger, more comprehensive projects into tiny, publishable snippets the best way to increase our understanding, or simply an expedient way to advance one’s career? Some have argued that this approach rewards grantsmanship but devalues research that doesn’t cost as much, and that the elegantly designed experiments of the past would never get funded now because they weren’t financially ambitious enough.

 

Is the tradeoff of adjunct instructors, and the loss of direct student involvement with researchers that comes with it, worth the increased publication rates? Presumably that direct access to the field is part of what generates graduate school recruits. Is increasing research productivity tenable if there is no support for replicating the results? It can be argued that this, in effect, replaces the careful and considered practice of science with a quest for new and shocking results at all costs.

 

These are questions that need to be grappled with, but I’m unconvinced that mathematical metrics are going to solve them. Certainly bibliometric indicators have their place, and new methods that purport to be less biased may improve their usefulness, but the core issues here are cultural. Scientists and the academic community must find a balance that allows for realistic research productivity whose output is replicable. We must also allow professors to carry out their dual mission of research and teaching without tipping the balance completely toward research, assuming we still believe that the university model of teaching, mentorship, and research has value.
