[1] 杨思洛, 聂颖. 结合全文本分析的论文影响力评价模型研究[J]. 现代情报, 2022, 42(3): 133-146.
[2] 陆伟, 李鹏程, 张国标, 等. 学术文本词汇功能识别——基于BERT向量化表示的关键词自动分类研究[J]. 情报学报, 2020, 39(12): 1320-1329.
[3] 王禹, 吴云. 基于评论细粒度观点的跨域推荐模型[J/OL]. [2022-03-09]. http://epub1.gou5juan.com/kcms/detail/11.2127.TP.20220304.1425.005.html.
[4] HATZIVASSILOGLOU V, MCKEOWN K. Predicting the semantic orientation of adjectives[EB/OL]. [2022-08-20]. https://dl.acm.org/doi/10.3115/976909.979640.
[5] 王颖. 学术资源挖掘方法研究综述[J]. 现代情报, 2021, 41(12): 164-177.
[6] PANG B, LEE L. Opinion mining and sentiment analysis[J]. Foundations and trends in information retrieval, 2008, 2(1/2): 1-135.
[7] 陈旻, 朱凡微, 吴明晖, 等. 观点挖掘综述[J]. 浙江大学学报(工学版), 2014, 48(8): 1461-1472.
[8] 赵泽青. 网络评论观点挖掘综述[J]. 现代计算机(专业版), 2019(7): 49-53.
[9] 袁博. 观点挖掘模型的研究与改进[D]. 上海: 上海交通大学, 2017.
[10] 韩忠明, 李梦琪, 刘雯, 等. 网络评论方面级观点挖掘方法研究综述[J]. 软件学报, 2018(2): 417-441.
[11] 温浩, 乔晓东. 文摘创新点的语义本体模型研究[J]. 情报学报, 2017, 36(9): 964-971.
[12] 温浩, 何茜茹. 学术文摘创新点挖掘的认知分析方法[J]. 情报学报, 2021, 40(5): 489-499.
[13] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[14] 常军林, 吴笑伟, 吴芬芬, 等. 基于特征和隐马尔可夫模型的文本信息抽取[J]. 河南科技大学学报(自然科学版), 2008(2): 55-57, 70, 110-111.
[15] 王晓, 李纲, 毛进, 等. 突发事件舆情观点识别与分析研究评述[J]. 图书情报知识, 2021(1): 93-102.
[16] 周星瀚, 刘宇, 邱秀连. 基于深度学习和CRF的新闻文章的观点提取[J]. 电子设计工程, 2020, 28(3): 18-22.
[17] 韩嵩, 韩秋弘. 半监督学习研究的述评[J]. 计算机工程与应用, 2020, 56(6): 19-27.
[18] 谭春辉, 熊梦媛. 基于LDA模型的国内外数据挖掘研究热点主题演化对比分析[J]. 情报科学, 2021, 39(4): 174-185.
[19] 张柳, 王晰巍, 黄博, 等. 基于LDA模型的新冠肺炎疫情微博用户主题聚类图谱及主题传播路径研究[J]. 情报学报, 2021, 40(3): 234-244.
[20] 姚兆旭, 马静. 面向微博话题的"主题+观点"词条抽取算法研究[J]. 现代图书情报技术, 2016(Z1): 78-86.
[21] CARON M, BOJANOWSKI P, JOULIN A, et al. Deep clustering for unsupervised learning of visual features[C]//Proceedings of the European conference on computer vision. Munich: ECCV, 2018: 139-156.
[22] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[23] LIU Q. Research on approaches to opinion target extraction in opinion mining[D]. Nanjing: Southeast University, 2016.
[24] 陈晓美, 高铖, 关心惠. 网络舆情观点提取的LDA主题模型方法[J]. 图书情报工作, 2015, 59(21): 21-26.
[25] LIN J, SUN X, MA S, et al. Global encoding for abstractive summarization[C]//Proceedings of the 56th annual meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Melbourne: ACL, 2018: 163-169.
[26] YANG F, WANG W C, WANG F, et al. scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data[J]. Nature machine intelligence, 2022, 4(10): 852-866.
[27] 温浩. 科技文摘创新点语义识别与分类方法研究[J]. 情报学报, 2019, 38(3): 249-256.
[28] RIBEIRO R, MATOS D M. Extractive summarization of broadcast news: comparing strategies for European Portuguese[C]//Proceedings of the international conference on text, speech and dialogue. Berlin: Springer, 2007, 4629: 115-122.
[29] CHENG G, LI X, YAN Y H. Using highway connections to enable deep small-footprint LSTM-RNNs for speech recognition[J]. Chinese journal of electronics, 2019, 28(1): 107-112.
[30] 孙宝山, 谭浩. 基于ALBERT-UniLM模型的文本自动摘要技术研究[J/OL]. [2022-03-08]. http://epub1.gou5juan.com/kcms/detail/11.2127.TP.20210802.0922.002.html.
[31] SJOBERGH J. Older versions of the ROUGE eval summarization evaluation system were easier to fool[J]. Information processing & management, 2007, 43(6): 1500-1505.
[32] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. Cambridge, MA: MIT Press, 2016.
[33] ZHAO J, MAO X, CHEN L. Learning deep features to recognize speech emotion using merged deep CNN[J]. IET signal processing, 2018, 12(6): 713-721.
[34] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[EB/OL]. [2022-08-20]. https://dl.acm.org/doi/10.5555/3295222.3295349.