Is Biodiversity Research Biased?
In 2010, the Convention on Biological Diversity (CBD), a comprehensive, multilateral treaty signed by 193 nations, established 20 biodiversity targets to coincide with the beginning of the UN Decade of Biodiversity. Collectively, these targets aim to address the underlying drivers of biodiversity loss, safeguard ecosystems, reduce direct pressures on biodiversity, preserve species and genetic diversity, enhance the benefits derived from biodiversity and ecosystem services, and strengthen the implementation of conservation measures worldwide through capacity building. However, concern has been raised that we are in danger of failing to reach these goals because research is biased away from the most biodiverse nations. In a recent PLoS Biology article, “Conservation Research Is Not Happening Where It Is Most Needed,” Kerrie Wilson and her co-authors show that the published literature is skewed towards more developed nations such as the US and away from countries of graver concern. They make the case that by failing to conduct adequate research in the most biodiverse areas of the globe, we are threatening our capacity to manage and preserve our natural ecosystems.
Wilson and her colleagues focus on one of the CBD’s Aichi Biodiversity targets, Target 19, which reads:
“By 2020, knowledge, the science base and technologies relating to biodiversity, its values, functioning, status and trends, and the consequences of its loss, are improved, widely shared and transferred, and applied.”
The goal of this target is to increase not only the amount of knowledge about biodiversity, but its quality as well. For policy-makers to make informed decisions, they must have a sound understanding of the extent of the services provided by highly biodiverse ecosystems and of the pressures those ecosystems face. It is equally important that this information be accessible. Stakeholders often want to launch initiatives to preserve biodiversity, identify threats, and build capacity for monitoring and research, but their efforts are limited when the necessary information is difficult to access, or simply does not exist. Target 19 also calls for increased investment in research, particularly in observer networks, but also in taxonomy, modelling, and participatory research.
Wilson and her co-authors conducted “. . . the first comprehensive analysis of publishing trends of the conservation science literature” by identifying all publications from 2014 that covered the topic of “conservation.” Their search focused on the fields of ecology, environmental sciences, geography, plant sciences, and zoology. A search of the Thomson Reuters Zoological Record and Web of Science Core Collection databases yielded a whopping 10,036 articles from over 1,000 journals.
Alarmingly, the countries least represented were the countries where this information is most needed. Consider mammal conservation: the top five countries of concern are Indonesia, Madagascar, Peru, Mexico, and Australia. By Wilson and her colleagues’ findings, these countries appear in only about 12% of the publications from 2014, but, given their relative importance to conservation and biodiversity, they should be represented in 37% of studies.
By contrast, the United States should be represented in less than 1% of mammal conservation studies; instead, it appeared in nearly 18%, the most of any country.
If we look instead at vascular plants, and at endemic and functional species, the top five countries of concern are Ecuador, Costa Rica, Panama, the Dominican Republic, and Papua New Guinea. Collectively, these countries appear in less than 2% of published studies, when their relative importance suggests they should be represented in about 8%.
While the generation of research in the areas of most concern is an obvious issue, research accessibility is also a problem. Of the studies focusing on the ten countries of greatest conservation concern outlined above, fewer than 12% were published in open-access journals. Open-access publication increases the visibility and accessibility of research, whereas traditional journals often require subscriptions, creating paywalls that limit access, particularly for policy-makers and researchers in developing nations. Open-access journals, however, often charge hefty publication fees. Wilson and her co-authors call on scientific societies and publishing houses “. . . to openly commit to waiving fees for research from historically underrepresented countries, particularly those where local in-country scientists and institutions have played a significant role in the research.”
Communicating research findings is also key to decreasing the disparity between nations. Altmetric scores attempt to gauge the impact of research based on the attention it receives in social and news media. Using these scores, Wilson and colleagues found substantial differences between research generated in the US and that of other countries, indicating that what little research is produced in under-represented countries is further hindered by a lack of media coverage. This should serve as a clarion call to increase coverage of research from less represented, yet more biodiverse, countries.
To reach the outlined CBD goals by 2020, research infrastructure must be strengthened in less represented areas of the globe. Our ability to meet conservation goals will be compromised if we continue to under-represent and ignore many of our most biodiverse areas. Addressing the bias in the published literature, however, will require substantial investment from funders, governments, the private sector, and researchers.
We agree with most of the conclusions presented by Kerrie A. Wilson et al. (2016), in particular that a biased spatial distribution of case studies poses a major problem for meta-analytical approaches in biodiversity conservation. However, we think that the mapping methods used in the paper may lead to misinterpretations of the spatial patterns (or densities) of the case-study research analysed by Wilson et al. We therefore propose two alternative ways in which the publication data gathered by Wilson et al. (2016) can be mapped, leading to different conclusions from those of the authors.
Our full comment is available below the paper at http://journals.staging.plos.org/plosbiology/article/comment?id=info%3Adoi%2F10.1371%2Fannotation%2F3615501c-1ecd-4542-9119-5c0a7efb3cef and in PDF form at http://bit.ly/1SY8iDO