Information Research

Comparison of Field Normalization Effects Based on Different Discipline Classification Schemes

  • Ren Yuanqiu,
  • Wang Xing,
  • Zheng Qinqin
  • 1. School of Information, Shanxi University of Finance & Economics, Taiyuan 030006;
    2. Urban Science, Shanghai 200120
Ren Yuanqiu (ORCID: 0000-0002-5789-0361), master's student; Zheng Qinqin (ORCID: 0000-0002-9022-0589), data analyst.

Received date: 2020-07-28

  Revised date: 2020-10-25

  Online published: 2021-02-05

Funding

This paper is one of the research outputs of the Humanities and Social Sciences Youth Fund project of the Ministry of Education, "Research on the International Academic Discourse Power of World-Class Universities: The Perspective of International Academic Journal Editorial Board Members" (Grant No. 17YJCZH179).



Cite this article

Ren Yuanqiu, Wang Xing, Zheng Qinqin. Comparison of Field Normalization Effects Based on Different Discipline Classification Schemes[J]. Library and Information Service, 2021, 65(3): 84-92. DOI: 10.13266/j.issn.0252-3116.2021.03.011

Abstract

[Purpose/significance] This paper analyzes the impact of different discipline classification schemes on field normalization effects, and compares the effects of different field normalization methods under those schemes. [Method/process] The study focused on two aspects: first, it compared the effects of the mean method, the median method, and the Z-score method under the Web of Science classification scheme; second, it changed the granularity of the discipline classification scheme to test the sensitivity of the three field normalization methods under the Essential Science Indicators (ESI) and Organisation for Economic Co-operation and Development (OECD) classification schemes. [Result/conclusion] The results show that the discipline classification scheme does not have a significant impact on field normalization effects: the effects of the normalization methods remain basically unchanged across schemes. Judging from the CCDF (complementary cumulative distribution function) plots, the citation distributions produced by the three normalization methods are clearly more concentrated than the raw citation distribution, and the distributions for the three methods remain roughly the same after changing the granularity of the classification scheme. When the top z% method is used to test the normalization effect numerically, the three methods again remain basically unchanged across classification schemes and exhibit the following pattern: for cutoffs below the global top 30% (z < 30), the mean method and the Z-score method differ slightly in effect but both outperform the median method; in the top 30%-40% band, the Z-score method shows a clear advantage; beyond the top 40%, the median method is significantly better than the other two.
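The three normalization methods and the top z% check described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the field names and citation counts are hypothetical, and the top z% test is read as checking whether each field is proportionally represented in the pooled global top z% after normalization.

```python
from statistics import mean, median, pstdev

def normalize(citations, method="mean"):
    """Field-normalize the citation counts of one field.

    "mean":   c / field mean                 (ratio-to-mean)
    "median": c / field median               (ratio-to-median)
    "zscore": (c - field mean) / field stdev (Z-score)
    """
    mu = mean(citations)
    if method == "mean":
        return [c / mu for c in citations]
    if method == "median":
        med = median(citations)
        return [c / med for c in citations]
    if method == "zscore":
        sigma = pstdev(citations)
        return [(c - mu) / sigma for c in citations]
    raise ValueError(f"unknown method: {method!r}")

def top_z_shares(fields, method="mean", z=10):
    """Pool the normalized scores of all fields, take the global top z%,
    and return each field's share of that top slice.  Under a perfectly
    fair normalization, every field's share matches its share of the
    pooled paper population."""
    pooled = [(score, name)
              for name, cits in fields.items()
              for score in normalize(cits, method)]
    pooled.sort(reverse=True)
    k = max(1, round(len(pooled) * z / 100))
    top = [name for _, name in pooled[:k]]
    return {name: top.count(name) / k for name in fields}

# Two toy "fields" whose raw citation levels differ tenfold; after
# mean normalization each holds half of the global top 50%.
fields = {"physics": [1, 2, 3, 4], "biology": [10, 20, 30, 40]}
print(top_z_shares(fields, method="mean", z=50))
# → {'physics': 0.5, 'biology': 0.5}
```

A CCDF comparison like the one in the paper would then plot, for each method, the fraction of papers whose normalized score exceeds a value x; the closer the per-field curves collapse onto one another, the better the normalization.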
