[1] KLEIN G. A naturalistic decision making perspective on studying intuitive decision making[J]. Journal of applied research in memory and cognition, 2015, 4(3): 164-168.
[2] EDWARDS J S, DUAN Y, ROBINS P C. An analysis of expert systems for business decision making at different levels and in different roles[J]. European journal of information systems, 2000, 9(1): 36-46.
[3] 刘霞. “决策智能”成数字化转型新趋势[EB/OL]. [2024-10-20]. http://digitalpaper.stdaily.com/http_www.kjrb.com/kjrb/html/2022-09/30/content_542430.htm?div=-1. (LIU X. "Decision intelligence" as a new trend in digital transformation[EB/OL]. [2024-10-20]. http://digitalpaper.stdaily.com/http_www.kjrb.com/kjrb/html/2022-09/30/content_542430.htm?div=-1.)
[4] DIETVORST B J, BHARTI S. People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error[J]. Psychological science, 2020, 31(10): 1302-1314.
[5] GUNNING D, VORM E, WANG Y, et al. DARPA's explainable AI (XAI) program: a retrospective[J]. Applied AI letters, 2021, 2(4): e61.
[6] DONG J, CHEN S, MIRALINAGHI M, et al. Why did the AI make that decision? Towards an explainable artificial intelligence (XAI) for autonomous driving systems[J]. Transportation research part C: emerging technologies, 2023, 156: 104358.
[7] ARRIETA A B, DIAZ-RODRIGUEZ N, DEL SER J, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI[J]. Information fusion, 2020, 58: 82-115.
[8] 孔祥维, 唐鑫泽, 王子明. 人工智能决策可解释性的研究综述 [J]. 系统工程理论与实践, 2021, 41(2): 524-536. (KONG X W, TANG X Z, WANG Z M. A survey of explainable artificial intelligence decision[J]. Systems engineering-theory & practice, 2021, 41(2): 524-536.)
[9] SUFFIAN M, GRAZIANI P, ALONSO J M, et al. FCE: feedback based counterfactual explanations for explainable AI[J]. IEEE access, 2022, 10: 72363-72372.
[10] 吴丹, 孙国烨. 迈向可解释的交互式人工智能:动因、途径及研究趋势[J]. 武汉大学学报(哲学社会科学版), 2021,74(5): 16-28. (WU D, SUN G Y. Towards explainable interactive artificial intelligence: motivations, approaches, and research trends[J]. Wuhan University journal (philosophy & social science), 2021, 74(5): 16-28.)
[11] CABITZA F, CAMPAGNER A, NATALI C, et al. Painting the black box white: experimental findings from applying XAI to an ECG reading setting[J]. Machine learning and knowledge extraction, 2023, 5(1): 269-286.
[12] NAGAHISARCHOGHAEI M, NUR N, CUMMINS L, et al. An empirical survey on explainable AI technologies: recent trends, use-cases, and categories from technical and application perspectives[J]. Electronics, 2023, 12(5): 1092.
[13] MILLER T. Explanation in artificial intelligence: insights from the social sciences[J]. Artificial intelligence, 2019, 267: 1-38.
[14] SHNEIDERMAN B. Human-centered artificial intelligence: reliable, safe & trustworthy[J]. International journal of human–computer interaction, 2020, 36(6): 495-504.
[15] LIPTON Z C. The mythos of model interpretability[J]. Queue, 2018, 16(3): 31-57.
[16] 王冬丽, 杨珊, 欧阳万里, 等. 人工智能可解释性: 发展与应用 [J]. 计算机科学, 2023, 50(S1): 19-25. (WANG D L, YANG S, OUYANG W L, et al. Explainability of artificial intelligence: development and application[J]. Computer science, 2023, 50(S1): 19-25.)
[17] VILONE G, LONGO L. Notions of explainability and evaluation approaches for explainable artificial intelligence[J]. Information fusion, 2021, 76: 89-106.
[18] DE BRUIJN H, WARNIER M, JANSSEN M. The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making[J]. Government information quarterly, 2022, 39(2): 101666.
[19] CABITZA F, CAMPAGNER A, MALGIERI G, et al. Quod erat demonstrandum? Towards a typology of the concept of explanation for the design of explainable AI[J]. Expert systems with applications, 2023, 213: 118888.
[20] GACZEK P, LESZCZYNSKI G, MOUAKHER A. Collaboration with machines in B2B marketing: overcoming managers' aversion to AI-CRM with explainability[J]. Industrial marketing management, 2023, 115(10): 127-142.
[21] BEHERA R K, BALA P K, RANA N P. Creation of sustainable growth with explainable artificial intelligence: an empirical insight from consumer packaged goods retailers[J]. Journal of cleaner production, 2023, 399: 136605.
[22] LEICHTMANN B, HINTERREITER A, HUMER C, et al. Explainable artificial intelligence improves human decision-making: results from a mushroom picking experiment at a public art festival[J]. International journal of human–computer interaction, 2024, 40(17): 4787-4804.
[23] CHAN G K Y. AI employment decision-making: integrating the equal opportunity merit principle and explainable AI[J]. AI & society, 2024, 39(3): 1027-1038.
[24] ALUFAISAN Y, MARUSICH L R, BAKDASH J Z, et al. Does explainable artificial intelligence improve human decision-making?[C]//Proceedings of the AAAI conference on artificial intelligence. Palo Alto: AAAI Press, 2021, 35(8): 6618-6626.
[25] POURSABZI-SANGDEH F, GOLDSTEIN D G, HOFMAN J M, et al. Manipulating and measuring model interpretability[C]//Proceedings of the 2021 CHI conference on human factors in computing systems. New York: Association for Computing Machinery, 2021: 1-52.
[26] SCHEMMER M, HEMMER P, NITSCHE M, et al. A meta-analysis of the utility of explainable artificial intelligence in human-AI decision-making[C]//Proceedings of the 2022 AAAI/ACM conference on AI, ethics, and society. New York: Association for Computing Machinery, 2022: 617-626.
[27] GHASSEMI M, OAKDEN-RAYNER L, BEAM A L. The false hope of current approaches to explainable artificial intelligence in health care[J]. The lancet digital health, 2021, 3(11): e745-e750.
[28] NAISEH M, CEMILOGLU D, AL THANI D, et al. Explainable recommendations and calibrated trust: two systematic user errors[J]. Computer, 2021, 54(10): 28-37.
[29] JANSSEN M, HARTOG M, MATHEUS R, et al. Will algorithms blind people? The effect of explainable AI and decision-makers' experience on AI-supported decision-making in government[J]. Social science computer review, 2022, 40(2): 478-493.
[30] AULETTA F, KALLEN R W, DI BERNARDO M, et al. Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI[J]. Scientific reports, 2023, 13(1): 4992.
[31] EBERMANN C, SELISKY M, WEIBELZAHL S. Explainable AI: the effect of contradictory decisions and explanations on users’ acceptance of AI systems[J]. International journal of human–computer interaction, 2023, 39(9): 1807-1826.
[32] DU Y, ANTONIADI A M, MCNESTRY C, et al. The role of XAI in advice-taking from a clinical decision support system: a comparative user study of feature contribution-based and example-based explanations[J]. Applied sciences, 2022, 12(20): 10323.
[33] ŽLAHTIČ B, ZAVRŠNIK J, BLAŽUN VOŠNER H, et al. Agile machine learning model development using data canyons in medicine: a step towards explainable artificial intelligence and flexible expert-based model improvement[J]. Applied sciences, 2023, 13(14): 8329.
[34] DIKMEN M, BURNS C. The effects of domain knowledge on trust in explainable AI and task performance: a case of peer-to-peer lending[J]. International journal of human-computer studies, 2022, 162: 102792.
[35] MIRÓ-NICOLAU M, MOYÀ-ALCOVER G, JAUME-I-CAPÓ A. Evaluating explainable artificial intelligence for X-ray image analysis[J]. Applied sciences, 2022, 12(9): 4459.
[36] KOLAJO T, DARAMOLA O. Human-centric and semantics-based explainable event detection: a survey[J]. Artificial intelligence review, 2023, 56(1): 119-158.
[37] ALICIOGLU G, SUN B. A survey of visual analytics for explainable artificial intelligence methods[J]. Computers & graphics, 2022, 102: 502-520.
[38] SCHOONDERWOERD T A J, JORRITSMA W, NEERINCX M A, et al. Human-centered XAI: developing design patterns for explanations of clinical decision support systems[J]. International journal of human-computer studies, 2021, 154: 102684.
[39] APOSTOLOPOULOS I D, GROUMPOS P P. Fuzzy cognitive maps: their role in explainable artificial intelligence[J]. Applied sciences, 2023, 13(6): 3412.
[40] MISITANO G, AFSAR B, LÁRRAGA G, et al. Towards explainable interactive multiobjective optimization: R-XIMO[J]. Autonomous agents and multi-agent systems, 2022, 36(2): 43.
[41] ALKHALAF S, ALTURISE F, BAHADDAD A A, et al. Adaptive aquila optimizer with explainable artificial intelligence-enabled cancer diagnosis on medical imaging[J]. Cancers, 2023, 15(5): 1492.
[42] METTA C, BERETTA A, GUIDOTTI R, et al. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning[J]. International journal of data science and analytics, 2023, 16(1): 1-13.
[43] GOODELL J W, JABEUR S B, SAÂDAOUI F, et al. Explainable artificial intelligence modeling to forecast bitcoin prices[J]. International review of financial analysis, 2023, 88: 102702.
[44] LEEM S, OH J, SO D, et al. Towards data-driven decision-making in the Korean film industry: an XAI model for box office analysis using dimension reduction, clustering, and classification[J]. Entropy, 2023, 25(4): 571.
[45] SHAJALAL M, BODEN A, STEVENS G. Explainable product backorder prediction exploiting CNN: introducing explainable models in businesses[J]. Electronic markets, 2022, 32(4): 2107-2122.
[46] SHAMS M Y, GAMEL S A, TALAAT F M. Enhancing crop recommendation systems with explainable artificial intelligence: a study on agricultural decision-making[J]. Neural computing and applications, 2024, 36(11): 5695-5714.
[47] SPEITH T. A review of taxonomies of explainable artificial intelligence (XAI) methods[C]//Proceedings of the 2022 ACM conference on fairness, accountability, and transparency. New York: Association for Computing Machinery, 2022: 2239-2250.
[48] ISLAM M R, AHMED M U, BARUA S, et al. A systematic review of explainable artificial intelligence in terms of different application domains and tasks[J]. Applied sciences, 2022, 12(3): 1353.
[49] RIBEIRO M T, SINGH S, GUESTRIN C. "Why should I trust you?" Explaining the predictions of any classifier[C]//Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. New York: Association for Computing Machinery, 2016: 1135-1144.
[50] LUNDBERG S M, LEE S I. A unified approach to interpreting model predictions[J]. Advances in neural information processing systems, 2017, 30: 4765-4774.
[51] ALBAHRI A S, JOUDAR S S, HAMID R A, et al. Explainable artificial intelligence multimodal of autism triage levels using fuzzy approach-based multi-criteria decision-making and LIME[J]. International journal of fuzzy systems, 2024, 26(1): 274-303.
[52] POLI J P, OUERDANE W, PIERRARD R. Generation of textual explanations in XAI: the case of semantic annotation[C]//2021 IEEE International conference on fuzzy systems (FUZZ-IEEE). Piscataway: IEEE, 2021: 1-6.
[53] LIPTON Z C. The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery[J]. Queue, 2018, 16(3): 31-57.
[54] KOUKI P, SCHAFFER J, PUJARA J, et al. Generating and understanding personalized explanations in hybrid recommender systems[J]. ACM transactions on interactive intelligent systems (TiiS), 2020, 10(4): 1-40.
[55] SACHAN S, YANG J B, XU D L, et al. An explainable AI decision-support-system to automate loan underwriting[J]. Expert systems with applications, 2020, 144: 113100.
[56] FENG Y, HUA W, SUN Y. NLE-DM: natural-language explanations for decision making of autonomous driving based on semantic scene understanding[J]. IEEE transactions on intelligent transportation systems, 2023, 24(9): 9780-9791.
[57] DAZELEY R, VAMPLEW P, FOALE C, et al. Levels of explainable artificial intelligence for human-aligned conversational explanations[J]. Artificial intelligence, 2021, 299: 103525.
[58] MARIOTTI E, ALONSO J M, GATT A. Towards harnessing natural language generation to explain black-box models[C]//2nd Workshop on interactive natural language technology for explainable artificial intelligence. Dublin: Association for Computational Linguistics, 2020: 22-27.
[59] ISLAM M R, AHMED M U, BARUA S, et al. A systematic review of explainable artificial intelligence in terms of different application domains and tasks[J]. Applied sciences, 2022, 12(3): 1353.
[60] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//Proceedings of the IEEE international conference on computer vision. Piscataway: IEEE, 2017: 618-626.
[61] ADARSH V, GANGADHARAN G R. Applying explainable artificial intelligence models for understanding depression among IT workers[J]. IT professional, 2022, 24(5): 25-29.
[62] ZHOU B, KHOSLA A, LAPEDRIZA A, et al. Learning deep features for discriminative localization[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. Piscataway: IEEE, 2016: 2921-2929.
[63] YIĞIT T, ŞENGÖZ N, ÖZMEN Ö, et al. Diagnosis of paratuberculosis in histopathological images based on explainable artificial intelligence and deep learning[J]. Traitement du signal, 2022, 39(3): 863.
[64] BENNETOT A, LAURENT J L, CHATILA R, et al. Towards explainable neural-symbolic visual reasoning[EB/OL]. [2024-10-20]. https://ar5iv.labs.arxiv.org/html/1909.09065.
[65] PETRAUSKAS V, JASINEVICIUS R, DAMULEVICIENE G, et al. Explainable artificial intelligence-based decision support system for assessing the nutrition-related geriatric syndromes[J]. Applied sciences, 2021, 11(24): 11763.
[66] ALIYEVA K, MEHDIYEV N. Uncertainty-aware multicriteria decision analysis for evaluation of explainable artificial intelligence methods: a use case from the healthcare domain[J]. Information sciences, 2024, 657: 119987.
[67] VILONE G, LONGO L. Notions of explainability and evaluation approaches for explainable artificial intelligence[J]. Information fusion, 2021, 76: 89-106.
[68] NAUTA M, TRIENES J, PATHAK S, et al. From anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable AI[J]. ACM computing surveys, 2023, 55(13): 1-42.
[69] LUO D, CHENG W, XU D, et al. Parameterized explainer for graph neural network[J]. Advances in neural information processing systems, 2020, 33: 19620-19631.
[70] LAI W, GUO L, BIALKOWSKI K, et al. An explainable deep learning method for microwave head stroke localization[J]. IEEE journal of electromagnetics, RF and microwaves in medicine and biology, 2023, 7(4): 1-8.
[71] ZHANG H, CHEN J, XUE H, et al. Towards a unified evaluation of explanation methods without ground truth[EB/OL]. [2024-10-20]. https://ar5iv.labs.arxiv.org/html/1911.09017.
[72] FOLKE T, YANG S C H, ANDERSON S, et al. Explainable AI for medical imaging: explaining pneumothorax diagnoses with Bayesian teaching[C]//Artificial intelligence and machine learning for multi-domain operations applications III. San Francisco: SPIE, 2021, 11746: 644-664.
[73] ALVAREZ MELIS D, JAAKKOLA T. Towards robust interpretability with self-explaining neural networks[J]. Advances in neural information processing systems, 2018, 31.
[74] MEHTA H, PASSI K. Social media hate speech detection using explainable artificial intelligence (XAI)[J]. Algorithms, 2022, 15(8): 291.
[75] ALIYEVA K, MEHDIYEV N. Uncertainty-aware multicriteria decision analysis for evaluation of explainable artificial intelligence methods: a use case from the healthcare domain[J]. Information sciences, 2024, 657: 119987.
[76] HOFFMAN R, MUELLER S, KLEIN G, et al. Measuring trust in the XAI context[EB/OL]. [2024-10-20]. https://osf.io/preprints/psyarxiv/e3kv9.
[77] KNAPIČ S, MALHI A, SALUJA R, et al. Explainable artificial intelligence for human decision support system in the medical domain[J]. Machine learning and knowledge extraction, 2021, 3(3): 740-770.
[78] ZHOU J, GANDOMI A H, CHEN F, et al. Evaluating the quality of machine learning explanations: a survey on methods and metrics[J]. Electronics, 2021, 10(5): 593.
[79] CAHOUR B, FORZY J F. Does projection into use improve trust and exploration? An example with a cruise control system[J]. Safety science, 2009, 47(9): 1260-1270.
[80] KÖRBER M. Theoretical considerations and development of a questionnaire to measure trust in automation[C]//Proceedings of the 20th congress of the international ergonomics association (IEA 2018) Volume VI: Transport ergonomics and human factors (TEHF), aerospace human factors and ergonomics 20. Berlin: Springer international, 2019: 13-30.
[81] GULATI S, SOUSA S, LAMAS D. Design, development and evaluation of a human-computer trust scale[J]. Behaviour & information technology, 2019, 38(10): 1004-1015.
[82] HOFEDITZ L, CLAUSEN S, RIEß A, et al. Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring[J]. Electronic markets, 2022, 32(4): 2207-2233.
[83] GHASSEMI M, OAKDEN-RAYNER L, BEAM A L. The false hope of current approaches to explainable artificial intelligence in health care[J]. The lancet digital health, 2021, 3(11): e745-e750.
[84] ABDUL A, VERMEULEN J, WANG D, et al. Trends and trajectories for explainable, accountable and intelligible systems: an HCI research agenda[C]//Proceedings of the 2018 CHI conference on human factors in computing systems. New York: Association for Computing Machinery, 2018: 1-18.
[85] HARPER R H R. The role of HCI in the age of AI[J]. International journal of human–computer interaction, 2019, 35(15): 1331-1344.
[86] NISSEN M E, SENGUPTA K. Incorporating software agents into supply chains: experimental investigation with a procurement task[J]. MIS quarterly, 2006, 30(1): 145-166.
[87] LU J, LEE D, KIM T W, et al. Good explanation for algorithmic transparency[C]//Proceedings of the AAAI/ACM conference on AI, ethics, and society. New York: Association for Computing Machinery, 2020: 93-93.
[88] EL-ASSADY M, MORUZZI C. Which biases and reasoning pitfalls do explanations trigger? Decomposing communication processes in human–AI interaction[J]. IEEE computer graphics and applications, 2022, 42(6): 11-23.
[89] LIU J, SUN G, WU D. Heuristic intervention for algorithmic literacy: from the perspective of algorithmic awareness and knowledge[C]//International conference on information. Cham: Springer Nature, 2024: 248-258.
[90] DI MARTINO F, DELMASTRO F. Explainable AI for clinical and remote health applications: a survey on tabular and time series data[J]. Artificial intelligence review, 2023, 56(6): 5261-5315.