[Purpose/significance] When constructing a large-scale knowledge base, manual construction is inefficient and impractical, so automatically extracting massive amounts of knowledge from online encyclopedias has attracted increasing attention. Existing research focuses mainly on knowledge extraction from English online encyclopedias, while work on Chinese encyclopedia sources is still in its infancy. [Method/process] To address the construction of a large-scale Chinese knowledge base, this paper proposes an automatic construction method based on the architecture of Chinese online encyclopedias. (i) In the first stage, self-expanding learning is applied to the semantic relations between the subjects and objects of knowledge triples. (ii) In the second stage, a collaborative classifier combining Conditional Random Fields (CRF) and a Support Vector Machine (SVM) predicts the semantic relations between the labeled attributes and their attribute-value entities. [Result/conclusion] Experimental results show that, compared with previous work, the method improves entity-recognition precision and recall on typical Chinese encyclopedia category pages by up to about 10% and 6%, respectively.
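The CRF component of the second stage is a sequence labeler: it tags each token of an encyclopedia sentence as an attribute name, an attribute value, or neither, and the SVM then classifies the relation between the tagged pairs. The following is a minimal, self-contained sketch of the labeling step only: a hand-weighted linear-chain model decoded with the Viterbi algorithm. The cue lexicons and all scores are toy values standing in for trained CRF parameters, and the token examples are invented for illustration.

```python
# Toy linear-chain CRF decoding (Viterbi) for attribute/value labeling.
# All weights below are hand-set stand-ins for trained CRF parameters.

LABELS = ["O", "ATTR", "VAL"]  # outside, attribute name, attribute value

def emission(token, label):
    """Score of assigning `label` to `token` (toy lexicon features)."""
    attr_cues = {"population", "area", "capital"}   # hypothetical cue words
    val_cues = {"1382", "9.6", "Beijing"}
    if label == "ATTR":
        return 2.0 if token in attr_cues else -1.0
    if label == "VAL":
        return 2.0 if token in val_cues else -1.0
    return 0.5  # mild preference for "O" on unknown tokens

# Transition scores: e.g. a value tends to follow an attribute name.
TRANS = {(a, b): 0.0 for a in LABELS for b in LABELS}
TRANS[("ATTR", "VAL")] = 1.5
TRANS[("VAL", "ATTR")] = -0.5

def viterbi(tokens):
    """Return the highest-scoring label sequence under the toy model."""
    # delta[label] = best score of any path ending in `label` so far
    delta = {lab: emission(tokens[0], lab) for lab in LABELS}
    back = []
    for tok in tokens[1:]:
        new_delta, ptr = {}, {}
        for lab in LABELS:
            prev = max(LABELS, key=lambda p: delta[p] + TRANS[(p, lab)])
            new_delta[lab] = delta[prev] + TRANS[(prev, lab)] + emission(tok, lab)
            ptr[lab] = prev
        back.append(ptr)
        delta = new_delta
    # Backtrace from the best final label to recover the full path.
    last = max(LABELS, key=lambda lab: delta[lab])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["population", "1382", "is", "large"]))
# -> ['ATTR', 'VAL', 'O', 'O']
```

In the full method, the (ATTR, VAL) spans recovered here would be handed to the SVM stage as candidate pairs for semantic-relation prediction; a real CRF would learn the emission and transition weights from annotated encyclopedia pages rather than using fixed lexicons.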