An Empirical Study of Scientific Evaluation Based on the Correlation Between CNCI and Peer Review

  • Song Liping,
  • Wang Jianfang
  • 1. Management School of Tianjin Normal University, Tianjin 300387;
    2. Institutes of Science and Development, Chinese Academy of Sciences, Beijing 100190

Received date: 2017-11-11

Revised date: 2018-06-26

Online published: 2018-09-20

Abstract

[Purpose/significance] To make recommendations for responsible metrics and peer review, this paper analyses the feasibility of the Category Normalized Citation Impact (CNCI) and its correlation with peer review. [Method/process] Drawing on F1000 and InCites, the paper conducted a correlation analysis between CNCI and the citation counts of 29,850 cell biology papers and 30,326 biotechnology papers, and tested the Spearman rank correlation coefficient between the CNCI and F1000 scores of 956 cell biology papers. [Result/conclusion] The results show that CNCI is positively correlated with citation counts and significantly correlated with F1000 ratings, although the two indicators sometimes contradict each other for the same paper. CNCI can therefore reflect the results of peer review, serve as an indicator of academic influence, and support comparison across disciplines. However, relying on either peer review or CNCI alone biases scientific evaluation, so tools that link responsible metrics such as CNCI with informed peer review would be valuable additions to the field of research evaluation.
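The [Method/process] step rests on two computations: CNCI (a paper's citations divided by the mean citations of the InCites baseline for the same category, year, and document type) and the Spearman rank correlation coefficient. A minimal sketch of both, using entirely hypothetical values rather than the study's 956-paper dataset (the baseline mean of 12.4 and all scores below are invented for illustration):

```python
def average_ranks(xs):
    """Rank values 1..n; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over a run of tied values
        avg = (i + j) / 2 + 1            # positions are 0-based, ranks 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def cnci(citations, baseline_mean):
    """Category Normalized Citation Impact: citations over the expected
    (mean) citations of papers in the same category, year, and doc type."""
    return citations / baseline_mean

# Hypothetical sample: five papers' citation counts against an invented
# category baseline, paired with invented F1000 scores (1-3 scale).
cnci_values = [cnci(c, 12.4) for c in (30, 5, 18, 2, 45)]
f1000_scores = [2, 1, 2, 1, 3]
print(round(spearman(cnci_values, f1000_scores), 3))  # → 0.949
```

A strongly positive rho on such a sample would mirror the paper's finding that CNCI is significantly correlated with F1000 ratings; in practice one would also test the coefficient's significance, which this sketch omits.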

Cite this article

Song Liping, Wang Jianfang. An Empirical Study of Scientific Evaluation Based on the Correlation Between CNCI and Peer Review[J]. Library and Information Service, 2018, 62(18): 122-128. DOI: 10.13266/j.issn.0252-3116.2018.18.013
