Please use this identifier to cite or link to this item: http://dx.doi.org/10.25673/38473
Title: Towards speech-based interactive post hoc explanations in explainable AI
Author(s): Hillmann, Stefan
Möller, Sebastian
Michael, Thilo
Issue Date: 2021
Type: Conference object
Language: English
URN: urn:nbn:de:gbv:ma9:1-1981185920-387197
Subjects: XAI
Explainable AI
Post hoc explanations
Spoken dialog
Argumentation
Abstract: AI-based systems offer solutions for information extraction (e.g., finding information), information transformation (e.g., machine translation), classification (e.g., classifying news as fake or true), or decision support (e.g., providing diagnoses and treatment proposals for medical doctors) in many real-world applications. The solutions are based on machine learning (ML) models and are commonly offered to a large and diverse group of users, some of them experts, many others naïve users from a large population. Nowadays, deep neural network architectures in particular are black boxes for users and even developers [1, 4, 9] (cp. also [6]). A major goal of Explainable Artificial Intelligence (XAI) is to make complex decision-making systems more trustworthy and accountable [7, p. 2]. That is why XAI seeks to ensure transparency, interpretability, and explainability [9]. What most users have in common is that they are not able to understand the functioning of AI-based systems, i.e., these systems are perceived as black boxes. Humans are confronted with the results, but they can comprehend neither what information was used by the system to reach this result (interpretability) nor in which way this information was processed and weighted (transparency). The underlying reason is that an explicit functional description of the system is missing, or even not possible, in most ML-based AI systems: the function is trained by adjusting the internal parameters, and sometimes also the architecture is learned from a basic set of standard architectures. However, natural-language and speech-based explanations allow for better explainability [1, p. 11] through interactive post hoc explanations in the form of an informed dialog [7, p. 2]. Additionally, AI is also addressed by regulations, e.g., by the EU [2, 3], and thus becomes even more relevant for industry and research. Here, not least the recognition of bias in AI systems' decisions plays an important role.
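
The abstract's distinction between interpretability (what information was used) and transparency (how it was weighted), and the idea of verbalizing post hoc explanations in a dialog, can be illustrated with a minimal sketch. This is not taken from the paper: it uses a toy linear classifier for the fake-news example above, whose per-feature contributions are turned into a spoken-style answer to a user's "Why?" follow-up. All feature names, weights, and phrasings below are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' method): a toy linear
# "fake news" classifier whose per-feature contributions (weight * value)
# are verbalized as a dialog-style post hoc explanation.

from typing import Dict

# Hypothetical feature weights of a trained linear model (assumed values).
WEIGHTS: Dict[str, float] = {
    "clickbait_phrases": 1.8,
    "source_reliability": -2.3,
    "exclamation_marks": 0.9,
    "citation_count": -1.1,
}
BIAS = -0.2

def predict(features: Dict[str, float]) -> float:
    """Linear score; a score > 0 is interpreted as 'fake'."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: Dict[str, float], top_k: int = 2) -> str:
    """Post hoc explanation: rank features by |weight * value| and verbalize
    the strongest contributions (interpretability: what was used;
    transparency: how it was weighted)."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    parts = [
        f"{name.replace('_', ' ')} {'raised' if c > 0 else 'lowered'} "
        f"the fake-news score by {abs(c):.2f}"
        for name, c in ranked[:top_k]
    ]
    return "Mainly because " + " and ".join(parts) + "."

if __name__ == "__main__":
    article = {
        "clickbait_phrases": 0.9,
        "source_reliability": 0.2,
        "exclamation_marks": 0.7,
        "citation_count": 0.1,
    }
    label = "fake" if predict(article) > 0 else "true"
    print(f"System: I classified this article as {label}.")
    print("User:   Why?")
    print(f"System: {explain(article)}")

In an interactive setting, further user questions (e.g., about one specific feature) would trigger additional explain-style turns; that kind of informed, speech-based dialog is what the abstract points to.
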
URI: https://opendata.uni-halle.de//handle/1981185920/38719
http://dx.doi.org/10.25673/38473
Open Access: Open access publication
License: (CC BY-SA 4.0) Creative Commons Attribution ShareAlike 4.0
Appears in Collections: Fakultät für Elektrotechnik und Informationstechnik (OA)

Files in This Item:
File: AI-Debate2021_Hillmann et al._Final.pdf
Description: Article in conference proceedings
Size: 435.25 kB
Format: Adobe PDF