[Purpose/significance] This paper analyzes the impact of different discipline classification schemes on field normalization and compares the effects of different field normalization methods under these schemes. [Method/process] The study addresses two questions: first, it compares the effects of the mean method, the median method, and the Z-score method under the Web of Science classification scheme; second, it tests the sensitivity of the three field normalization methods to the choice of classification scheme by repeating the analysis under the Essential Science Indicators (ESI) and Organization for Economic Co-operation and Development (OECD) classification schemes. [Result/conclusion] The results show that the discipline classification scheme does not have a significant impact on field normalization, and the relative performance of the normalization methods remains essentially unchanged across classification schemes. Judging from the complementary cumulative distribution function (CCDF) plots, the citation distributions produced by the three field normalization methods are clearly closer to one another than the raw citation distributions, and they remain roughly the same after switching to classification schemes of different granularity. When the top z% method is used to test the normalization effect numerically, the relative performance of the three methods again remains essentially unchanged across classification schemes and follows this pattern: when the threshold is set below the top 30% of papers worldwide, the Z-score method differs only slightly from the mean method, and both outperform the median method; in the 30%-40% range, the Z-score method shows a clear advantage; above 40%, the median method is significantly better than the other two methods.
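As an illustration only, and not the authors' actual implementation, the following minimal Python sketch shows how the three field normalization methods named above (mean, median, Z-score) can be computed within each discipline, together with a simplified top z% check inspired by the paper's evaluation idea: under good normalization, each field should contribute roughly z of its papers to the global top z%. All function names, parameters, and the toy data are hypothetical.

```python
import numpy as np

def normalize(citations, fields, method="mean"):
    """Field-normalize citation counts within each discipline (hypothetical helper).

    method: "mean"   -> c / field mean
            "median" -> c / field median (may fail if the field median is 0)
            "zscore" -> (c - field mean) / field standard deviation
    """
    citations = np.asarray(citations, dtype=float)
    fields = np.asarray(fields)
    scores = np.empty_like(citations)
    for f in np.unique(fields):
        mask = fields == f
        c = citations[mask]
        if method == "mean":
            scores[mask] = c / c.mean()
        elif method == "median":
            scores[mask] = c / np.median(c)
        elif method == "zscore":
            scores[mask] = (c - c.mean()) / c.std()
        else:
            raise ValueError(f"unknown method: {method}")
    return scores

def top_z_share(scores, fields, z=0.10):
    """Share of each field's papers that fall into the global top z% of scores."""
    scores = np.asarray(scores, dtype=float)
    fields = np.asarray(fields)
    threshold = np.quantile(scores, 1 - z)
    in_top = scores >= threshold
    return {f: in_top[fields == f].mean() for f in np.unique(fields)}

# Hypothetical toy data: two disciplines with different citation practices.
rng = np.random.default_rng(0)
fields = np.array(["biology"] * 500 + ["mathematics"] * 500)
citations = np.concatenate([
    rng.lognormal(mean=3.0, sigma=1.0, size=500),   # highly cited field
    rng.lognormal(mean=1.5, sigma=1.0, size=500),   # lowly cited field
])

for m in ("mean", "median", "zscore"):
    shares = top_z_share(normalize(citations, fields, m), fields, z=0.10)
    print(m, shares)
```

The closer each field's share is to z for every field, the better the method has removed cross-field citation differences; this is only a proxy for the paper's top z% test, not a reproduction of it.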